
16012 Kafka Jobs - Page 7

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the employer's job portal.

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

JR0125187 Manager, Solution Engineering – Hyderabad, India

Are you ready to join a global organization that helps diverse teams stay at the forefront of technology and innovation? How about offering up your skills in a global business that is committed to moving money for better? Join Western Union as Manager, Solution Engineering. Western Union powers your pursuit.

As a Manager, you will manage our cross-channel platform engineering team and contribute to the development of new APIs for an enterprise-level initiative on our Compliance orchestration platform. This role is important for expanding our existing product capabilities, improving customer experience, and accelerating the launch of new products and services. You will participate in designing and building scalable, high-performance APIs that drive innovation and efficiency across our KYC and Compliance ecosystem.

Role Responsibilities

Planning & Delivery:
- Lead the planning and execution of program phases, ensuring alignment with strategic objectives and timelines
- Partner with functional leaders to define and prioritize program deliverables, ensuring a focus on delivering measurable business value
- Oversee the development and tracking of transition plans
- Develop and deliver clear, concise, and timely program status updates to all stakeholders, including executive-level reports
- Identify and address communication gaps, proactively manage issues, and support teams navigating conflicting priorities
- Provide expert advice, coaching, and mentorship to leads and team members; mentor developers on the team while fostering a collaborative environment
- Collaborate with stakeholders across Product and Technology to define and deliver technical solutions
- Stay hands-on, driving architecture simplification and consolidation of platforms into flexible, scalable, and compliant solutions
Role Requirements
- Strong experience managing teams, with a focus on API development and microservices architecture implementations
- 12+ years of progressive experience in program management, with a proven track record of leading large-scale, complex transformation programs
- Strong background in Java, Spring Boot, microservices, REST APIs, Spring Batch, Core Java, and Kafka; event-driven architecture experience
- Strong knowledge of AWS and experience developing cloud-based Java applications in AWS
- Strong hands-on experience with Kubernetes for container orchestration, including cluster management and application deployment
- Proven ability to lead and motivate cross-functional, global teams in a matrixed environment
- Excellent communication, presentation, and stakeholder management skills
- Experience working in agile, waterfall, and hybrid project management environments
- Proficiency in project management tools such as Jira, Jira Align, and Confluence
- PMP, Agile, SAFe, or other relevant certifications are highly preferred
- Experience with onsite and offshore teams
- Ability to translate and analyze requirements, and to document and communicate a detailed solution approach using suitable tools, techniques, templates, and diagrams
- Experience with large-scale workforce transformation
- Strong troubleshooting, problem-solving, and diagnostic skills
- Experience managing KYC, Compliance, and vendor integration in a banking/payments environment is a plus
- Strong communication skills, with the ability to interact with internal and external partners globally

We make financial services accessible to humans everywhere. Join us for what’s next. Western Union is positioned to become the world’s most accessible financial services company, transforming lives and communities. We’re a diverse and passionate customer-centric team of over 8,000 employees serving 200 countries and territories, reaching customers and receivers around the globe.
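The event-driven architecture this role calls for centers on one idea: producers publish events to a topic, and subscribers react without either side knowing about the other. Below is a minimal stdlib sketch of that pattern; it is an in-memory stand-in, not Kafka itself, and the topic and field names are purely illustrative.

```python
from collections import defaultdict

class EventBus:
    """Minimal in-memory stand-in for a Kafka-style topic: producers
    publish events to a named topic, and every subscriber registered
    on that topic receives them, decoupling the two sides."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)

# Illustrative example: a compliance-screening consumer reacting to KYC events.
bus = EventBus()
flagged = []
bus.subscribe("kyc-events",
              lambda e: flagged.append(e["customer_id"]) if e["risk"] == "high" else None)

bus.publish("kyc-events", {"customer_id": "C1", "risk": "low"})
bus.publish("kyc-events", {"customer_id": "C2", "risk": "high"})
print(flagged)  # ['C2']
```

In a real deployment the bus is the Kafka broker, handlers are consumer groups, and publishing survives process restarts; the decoupling shown here is what lets teams evolve producers and consumers independently.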
More than moving money, we design easy-to-use products and services for our digital and physical financial ecosystem that help our customers move forward. Just as we help our global customers prosper, we support our employees in achieving their professional aspirations. You’ll have plenty of opportunities to learn new skills and build a career, as well as receive a great compensation package. If you’re ready to help drive the future of financial services, it’s time for Western Union. Learn more about our purpose and people at https://careers.westernunion.com/.

Benefits

You will also have access to short-term incentives, multiple health insurance options, accident and life insurance, and best-in-class development platforms, to name a few (https://careers.westernunion.com/global-benefits/). Please see the location-specific benefits below, and note that your Recruiter may share additional role-specific benefits during your interview process or in an offer of employment.

Your India-specific benefits include:
- Employees Provident Fund (EPF)
- Gratuity payment
- Public holidays
- Annual leave, sick leave, compensatory leave, and maternity/paternity leave
- Annual health checkup
- Hospitalization insurance coverage (Mediclaim)
- Group life insurance, group personal accident insurance coverage, and business travel insurance
- Relocation benefit

Western Union values in-person collaboration, learning, and ideation whenever possible. We believe this creates value through common ways of working and supports the execution of enterprise objectives, which will ultimately help us achieve our strategic goals. By connecting face-to-face, we are better able to learn from our peers, problem-solve together, and innovate. Our Hybrid Work Model categorizes each role into one of three categories. Western Union has determined the category of this role to be Hybrid.
This is defined as a flexible working arrangement that enables employees to divide their time between working from home and working from an office location. The expectation is to work from the office a minimum of three days a week.

We are passionate about diversity. Our commitment is to provide an inclusive culture that celebrates the unique backgrounds and perspectives of our global teams while reflecting the communities we serve. We do not discriminate based on race, color, national origin, religion, political affiliation, sex (including pregnancy), sexual orientation, gender identity, age, disability, marital status, or veteran status. The company will provide accommodation to applicants, including those with disabilities, during the recruitment process, following applicable laws.

Estimated Job Posting End Date: 08-08-2025. This application window is a good-faith estimate of the time that this posting will remain open. This posting will be promptly updated if the deadline is extended or the role is filled.

Posted 1 day ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are looking for a Senior Manager to lead our K8s and VM challenges on the management plane team. The management plane is a web-based application crafted to give our storage customers the capabilities to handle and monitor our distributed storage infrastructure. Our team is continually dedicated to acquiring and implementing groundbreaking technologies to overcome obstacles and innovate solutions that enhance our ability to manage large clusters of machines efficiently.

What You Will Be Doing
- Manage a team of senior developers
- Design, develop, and maintain Kubernetes operators and our Container Storage Interface (CSI) plugin
- Develop a web-based solution that manages, operates, and monitors our distributed storage
- Work closely with other teams to define and implement new APIs

What We Need To See
- B.Sc., M.Sc., or Ph.D. in Computer Science or a related field, or equivalent experience
- 12+ years of experience in web development (both client and server) and 3+ years of experience in people management
- Proven experience with Kubernetes (K8s), including developing or maintaining operators and/or CSI plugins
- Experience scripting with Python, Bash, or similar
- At least 5 years of experience working in a Linux OS environment

Ways To Stand Out From The Crowd
- NodeJS for the server side: dominant modules are async & express
- Kafka, MongoDB, K8s
- Kata Containers, KubeVirt
- JavaScript frameworks: React, jQuery, c3js
- Scripting: Python and Bash, as well as Git and Linux

NVIDIA has continuously reinvented itself over two decades. NVIDIA’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI, the next era of computing, with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. This is our life’s work: to amplify human imagination and intelligence.
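The Kubernetes operators mentioned above are, at their core, reconcile loops: observe actual cluster state, compare it to the desired state, and act on the difference. A simplified sketch of one reconcile pass follows; plain dicts stand in for API-server objects, and all names and replica counts are illustrative, not a real operator SDK.

```python
def reconcile(desired, actual):
    """One pass of an operator-style reconcile loop (sketch): compare
    desired state against observed state and return the actions needed
    to converge them. Real operators do this against the Kubernetes
    API server via a client library; here both states are dicts
    mapping deployment name -> replica count."""
    actions = []
    for name, replicas in desired.items():
        if name not in actual:
            actions.append(("create", name, replicas))
        elif actual[name] != replicas:
            actions.append(("scale", name, replicas))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, 0))
    return actions

# Illustrative run: one deployment drifted, one is missing, one is stale.
actions = reconcile(
    desired={"csi-node": 3, "metrics": 1},
    actual={"csi-node": 2, "stale-job": 1},
)
print(actions)
```

Running reconcile repeatedly (on a timer or on watch events) is what makes operators self-healing: each pass converges the cluster toward the declared spec regardless of how it drifted.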
With highly competitive salaries and a comprehensive benefits package, NVIDIA is widely considered one of the technology world’s most desirable employers. We have some of the most brilliant and talented people in the world working for us and, due to extraordinary growth, our elite engineering teams are growing fast. If you're a creative and autonomous manager with a real passion for technology, we want to hear from you. JR1999682

Posted 1 day ago

Apply

3.0 years

0 Lacs

India

Remote

Location: Dallas, TX [Remote]
Job Type: Full-time Contract
Rate: USD $30/hour

Job Summary: We are seeking an experienced IBM App Connect Enterprise (ACE) Developer to join our integration team at a Fortune 50 client. The ideal candidate will be responsible for designing, developing, and maintaining integration solutions using IBM ACE (formerly IBM Integration Bus). This role requires deep knowledge of integration patterns, APIs, and enterprise messaging systems, as well as hands-on experience in building scalable and secure integration services.

Key Responsibilities:
- Design, develop, test, and deploy integration flows using IBM App Connect Enterprise (v11 or v12)
- Implement message flows, sub-flows, ESQL transformations, and REST/SOAP web services
- Integrate with various backend systems such as SAP, Salesforce, and databases using protocols like HTTP, MQ, JDBC, and FTP
- Develop and manage message models using DFDL, XML, JSON, and XSD
- Build reusable assets, templates, and patterns to accelerate integration delivery
- Collaborate with architecture, security, and DevOps teams to ensure solutions meet enterprise standards
- Troubleshoot and resolve issues related to performance, message delivery, and data integrity
- Document technical designs, integration logic, and deployment procedures

Required Skills & Qualifications:
- 3+ years of experience developing with IBM App Connect Enterprise (ACE) / IBM Integration Bus (IIB)
- Strong proficiency in ESQL, Java, and integration design patterns
- Experience with IBM MQ, Kafka, or other messaging systems
- Solid understanding of RESTful and SOAP services, including OpenAPI/Swagger
- Hands-on experience with DFDL, XML, JSON, XSLT
- Experience working in CI/CD environments with tools such as Git, Jenkins, and UrbanCode Deploy
- Familiarity with containerization (Docker, Kubernetes) and cloud deployments (Azure, AWS, GCP)
- Strong problem-solving and debugging skills

Preferred Qualifications:
- IBM Certified Developer – App Connect Enterprise certification
- Experience with hybrid cloud integrations
- Knowledge of event-driven architecture and API gateways
- Understanding of data security and encryption practices in integration
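The message-flow transformations at the heart of ACE development take an inbound message, map it onto a canonical model, and emit an outbound message. The following is a rough Python analogue of what a Compute node's ESQL mapping does; this is not ACE syntax, and the field names are invented for illustration.

```python
import json

def transform(order_msg: str) -> str:
    """Sketch of a transformation step in an integration flow: parse
    the inbound message, map it to a canonical model (renaming fields,
    deriving values, defaulting missing ones), and serialize the
    outbound message. Field names are hypothetical, not an ACE schema."""
    src = json.loads(order_msg)
    canonical = {
        "orderId": src["id"],                                  # rename
        "amount": round(src["qty"] * src["unitPrice"], 2),     # derive
        "currency": src.get("currency", "USD"),                # default
    }
    return json.dumps(canonical)

out = transform('{"id": "A-17", "qty": 3, "unitPrice": 9.5}')
print(out)  # {"orderId": "A-17", "amount": 28.5, "currency": "USD"}
```

In ACE the same rename/derive/default steps would be expressed in ESQL against the message tree, with DFDL or JSON parsers handling the serialization at the flow's input and output nodes.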

Posted 1 day ago

Apply

5.0 years

0 Lacs

India

On-site

Precisely is the leader in data integrity. We empower businesses to make more confident decisions based on trusted data through a unique combination of software, data enrichment products, and strategic services. What does this mean to you? For starters, it means joining a company focused on delivering outstanding innovation and support that helps customers increase revenue, lower costs, and reduce risk. In fact, Precisely powers better decisions for more than 12,000 global organizations, including 93 of the Fortune 100. Precisely's 2,500 employees are unified by four company core values that are central to who we are and how we operate: Openness, Determination, Individuality, and Collaboration. We are committed to career development for our employees and offer opportunities for growth, learning, and building community. With a "work from anywhere" culture, we celebrate diversity in a distributed environment, with a presence in 30 countries and 20 offices across five continents. Learn more about why it's an exciting time to join Precisely!

Overview

As a Principal Software Engineer, you will be part of the team that designs and develops cloud applications in the Data Integrity domain. You will be deeply involved in designing, developing, and unit testing applications in our next-generation Data Integrity Suite platform, which is based on K8s. You will work closely with software engineers, data scientists, and product managers to develop and deploy data-driven solutions that deliver business value. You will contribute to best practices, standards, and the technical roadmap.

What You Will Do
- Lead and contribute to end-to-end product development, with 5 to 7+ years of experience in designing and building scalable, modern cloud-based applications
- Take full technical ownership of product features, from design to deployment, ensuring high-quality deliverables
- Responsible for unit-level design, implementation, unit and integration testing, and overall adherence to SDLC best practices
- Experience with microservices architecture and containerization (Docker/Kubernetes)
- Drive and participate in technical design discussions and architecture reviews, and ensure robust, scalable, and maintainable solutions
- Collaborate effectively with cross-functional teams, including product managers, architects, DevOps, QA, and other engineering teams
- Participate in and enforce peer code reviews, ensuring best practices and continuous improvement in code quality and maintainability
- Continuously evaluate and adopt emerging technologies and frameworks to enhance system architecture and team productivity
- Embrace an Agile development environment, participate in sprints, and adapt to changes as needed
- Hands-on experience with technologies such as MongoDB, Kafka, and other modern distributed system components
- Strong communication skills and the ability to work in a global team environment
- Familiarity with monitoring and observability tools, or performance tuning

What We Are Looking For

Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.

Experience:
- 10+ years of experience in developing enterprise-grade software
- Demonstrated ability to technically lead product features through the full SDLC: design, development, testing, and deployment
- Experience delivering multi-tenant SaaS solutions and working in Agile development environments
- Up to 3 years of hands-on experience with cloud stack solutions (AWS, Azure, or GCP preferred)

Technical Skills:
- Strong Object-Oriented Programming (OOP) fundamentals with in-depth knowledge of Java and Spring Boot
- Solid understanding of design patterns and architectural patterns, with proven ability to apply them effectively
- Experience with Kafka or another messaging system (RabbitMQ, etc.); Kafka preferred
- Experience with RESTful APIs and building scalable, modern web applications
- Proficiency in databases: SQL, MySQL, MongoDB; Redis is a plus
- Experience with CI/CD tools and processes (e.g., Jenkins, Git, Artifactory, JIRA)
- Familiarity with Git, TDD (test-driven development), and Linux shell commands

Cloud & DevOps:
- Exposure to cloud-native technologies like Docker, Kubernetes, and microservices architecture
- Hands-on experience with, or an understanding of, AWS, Azure, or GCP cloud platforms is an added advantage

Soft Skills:
- Strong problem-solving and debugging skills
- Excellent interpersonal and communication skills
- Ability to collaborate with diverse, distributed, cross-functional teams

The personal data that you provide as part of this job application will be handled in accordance with relevant laws. For more information about how Precisely handles the personal data of job applicants, please see the Precisely Global Applicant and Candidate Privacy Notice.

Posted 1 day ago

Apply

14.0 years

0 Lacs

India

Remote

Who We Are

At Twilio, we’re shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences. Our dedication to remote-first work, and strong culture of connection and global inclusion, means that no matter your location, you’re part of a vibrant team with diverse experiences making a global impact each day. As we continue to revolutionize how the world interacts, we’re acquiring new skills and experiences that make work feel truly rewarding. Your career at Twilio is in your hands.

See Yourself At Twilio

Join the team as Twilio’s next Senior Engineering Manager on Twilio’s Traffic Intelligence team.

About The Job

This position manages the team of machine learning engineers on the Growth & User Intelligence team and partners closely with Product & Engineering teams to execute the roadmap for Twilio’s AI/ML products and services. You will understand customers' needs, build ML and data science products that work at global scale, and own end-to-end execution of large-scale ML solutions. As a senior manager, you will partner closely with technology and product leaders in the organization to enable engineers to turn ideas into reality.

Responsibilities

In this role, you’ll:
- Build and maintain scalable machine learning solutions for the Traffic Intelligence vertical
- Be a champion for your team, setting individuals up for success and putting others’ growth first
- Understand the architecture and processes required to build and operate always-available, complex, and scalable distributed systems in cloud environments
- Advocate agile processes, continuous integration, and test automation
- Be a strategic problem solver and thrive operating in broad scope, from conception through continuous operation of 24x7 services
- Exhibit strong communication skills, in person or on paper
You can explain technical concepts to product managers, architects, other engineers, and support.

Qualifications

Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply. If your career is just starting or hasn't followed a traditional path, don't let that stop you from considering Twilio. We are always looking for people who will bring something new to the table!

Required
- A minimum of 14 years of experience, including 5 years with a proven track record of leading and managing software teams
- Experience managing multiple workstreams within the team
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
- Technical experience with:
  - Applied ML models, with proficiency in Python
  - Modern data storage, messaging, and processing tools (Kafka, Apache Spark, Hadoop, Presto, DynamoDB, etc.)
  - Cloud technologies like AWS, GCP, etc.
  - ML frameworks like PyTorch, TensorFlow, or Keras
  - SaaS telemetry and observability tools such as Datadog, Grafana, etc.
- Excellent problem solving, critical thinking, and communication skills
- Broad knowledge of development environments and tools used to implement and build code for deployment
- Strong familiarity with agile processes and continuous integration, and a strong belief in automation over toil
- As a pragmatist, you are able to distill complex and ambiguous situations into actionable plans for your team
- Have owned and operated services end-to-end, from requirements gathering and design, to debugging and testing, to release management and operational monitoring

Desired
- Experience with Large Language Models
- Experience designing and implementing highly scalable and performant ML models

Location

This role will be remote and based in India (Karnataka, Tamil Nadu, Telangana, Maharashtra & New Delhi).

Travel

We prioritize connection and opportunities to build relationships with our customers and each other.
For this role, you may be required to travel occasionally to participate in project or team in-person meetings.

What We Offer

Working at Twilio offers many benefits, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more. Offerings vary by location.

Twilio thinks big. Do you? We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things. That's why we seek out colleagues who embody our values, something we call Twilio Magic. Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts. So, if you're ready to unleash your full potential, do your best work, and be the best version of yourself, apply now! If this role isn't what you're looking for, please consider other open positions.

Twilio is proud to be an equal opportunity employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. Additionally, Twilio participates in the E-Verify program in certain locations, as required by law.

Posted 1 day ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Lowe’s

Lowe’s is a FORTUNE® 100 home improvement company serving approximately 16 million customer transactions a week in the United States. With total fiscal year 2024 sales of more than $83 billion, Lowe’s operates over 1,700 home improvement stores and employs approximately 300,000 associates. Based in Mooresville, N.C., Lowe’s supports the communities it serves through programs focused on creating safe, affordable housing, improving community spaces, helping to develop the next generation of skilled trade experts, and providing disaster relief to communities in need. For more information, visit Lowes.com

Lowe’s India, the Global Capability Center of Lowe’s Companies Inc., is a hub for driving our technology, business, analytics, and shared services strategy. Based in Bengaluru with over 4,500 associates, it powers innovations across omnichannel retail, AI/ML, enterprise architecture, supply chain, and customer experience. From supporting and launching homegrown solutions to fostering innovation through its Catalyze platform, Lowe’s India plays a pivotal role in transforming home improvement retail while upholding a strong commitment to social impact and sustainability. For more information, visit Lowes India

About The Team

This team at a Fortune 100 tech company in the retail domain is responsible for building and maintaining critical enterprise platforms and frameworks that empower internal developers and drive key business functions. Their work spans the entire software development lifecycle and customer journey, encompassing tools like an Internal Developer Portal, front-end frameworks, A/B testing and customer insights platforms, workflow and API management solutions, a Customer Data Platform (CDP), and robust testing capabilities including performance and chaos testing.
This team is instrumental in providing the foundational technology that enables innovation, efficiency, and a deep understanding of their customers.

Job Summary

The primary purpose of this role is to translate business requirements and functional specifications into logical program designs and to deliver code modules, stable application systems, and software solutions. This includes developing, configuring, or modifying integrated business and/or enterprise application solutions within various computing environments. This role will work closely with stakeholders and cross-functional departments to communicate project statuses and proposals.

Core Responsibilities
- Translates business requirements and specifications into logical program designs, code modules, stable application systems, and software solutions with occasional guidance from senior colleagues; partners with the product team to understand business needs and functional specifications
- Develops, configures, or modifies integrated business and/or enterprise application solutions within various computing environments by designing and coding component-based applications using various programming languages
- Tests applications using test-driven development and behavior-driven development frameworks to ensure the integrity of the application
- Conducts root cause analysis of issues and participates in the code review process to identify gaps
- Implements continuous integration/continuous delivery processes to ensure quality and efficiency in the development cycle using DevOps automation processes and tools
- Ideates, builds, and publishes reusable libraries to improve productivity across teams
- Conducts the implementation and maintenance of complex business and enterprise software solutions to ensure successful deployment of released applications
- Solves difficult technical problems to ensure solutions are testable, maintainable, and efficient
Years Of Experience
- 2 years of experience in software development or a related field
- 2 years of experience working on projects involving the implementation of solutions applying development life cycles (SDLC) through iterative agile development
- 2 years of experience working with any of the following: frontend technologies (user interface/user experience), middleware (microservices and application programming interfaces), database technologies, or DevOps

Required Minimum Qualifications
- 2 years of experience writing technical documentation in a software environment and developing and implementing business systems within an organization
- Bachelor's degree in computer science, computer information systems, or a related field (or equivalent work experience in lieu of a degree)

Skill Set Required
- Core Java proficiency: deep understanding of Java fundamentals, data structures, algorithms, and best practices
- Spring Framework (especially Spring Boot): experience building and deploying applications with Spring Boot, including dependency injection, RESTful API development, and data persistence
- Microservices architecture: understanding of microservice principles and design patterns, and experience building and deploying distributed systems
- Kafka expertise: hands-on experience with Kafka for message queuing, event streaming, and building asynchronous communication between services
- API design & development: proficiency in designing and implementing robust and scalable RESTful APIs
- SQL & NoSQL databases: experience working with both relational (SQL) databases and NoSQL databases (e.g., MongoDB, Elastic), including data modeling and query optimization for each
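One practical wrinkle of the Kafka-based asynchronous communication described in the skill set: Kafka delivers at-least-once, so a consumer may see the same message twice, and a common defense is to deduplicate on a message key so processing stays idempotent. A small stdlib sketch of that idempotent-consumer pattern follows; there is no real broker here, and the ids and payloads are illustrative.

```python
def process_batch(messages, processed_ids, ledger):
    """At-least-once delivery means redeliveries happen; skip any
    message whose id was already handled so side effects (here,
    appending to a ledger) occur exactly once per logical message."""
    for msg in messages:
        if msg["id"] in processed_ids:
            continue  # duplicate redelivery: safe to drop
        ledger.append(msg["payload"])
        processed_ids.add(msg["id"])

seen, ledger = set(), []
batch = [
    {"id": "m1", "payload": "debit 10"},
    {"id": "m2", "payload": "credit 5"},
    {"id": "m1", "payload": "debit 10"},   # redelivered duplicate
]
process_batch(batch, seen, ledger)
print(ledger)  # ['debit 10', 'credit 5']
```

In production the `processed_ids` set would live in a durable store (or the dedup would ride on a transactional outbox), but the invariant is the same: replaying a message must not repeat its effect.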
Lowe's is an equal opportunity employer and administers all personnel practices without regard to race, color, religious creed, sex, gender, age, ancestry, national origin, mental or physical disability or medical condition, sexual orientation, gender identity or expression, marital status, military or veteran status, genetic information, or any other category protected under federal, state, or local law. Starting rate of pay may vary based on factors including, but not limited to, position offered, location, education, training, and/or experience. For information regarding our benefit programs and eligibility, please visit https://talent.lowes.com/us/en/benefits.

Posted 1 day ago

Apply

1.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Job Description

About Oracle Analytics & Big Data Service: Oracle Analytics is a complete platform that supports every role within analytics, offering cloud-native services or on-premises solutions without compromising security or governance. Our platform delivers a unified system for managing everything from data collection to decision-making, with seamless integration of AI and machine learning to help businesses accelerate productivity and uncover critical insights.

Oracle Big Data Service, part of Oracle Analytics, is a fully managed, automated cloud service designed to help enterprises create scalable Hadoop-based data lakes. The service's scope encompasses not just tight integration with OCI’s native infrastructure (security, cloud, storage, etc.) but also deep integration with other relevant cloud-native services in OCI. It includes cloud-native approaches to service-level patching and upgrades, and maintaining high availability of the service in the face of random failures and planned downtime in the underlying infrastructure (e.g., patching Linux kernels to address a security vulnerability). Developing systems for monitoring and collecting telemetry on the service’s runtime characteristics, and being able to act on that telemetry data, is also part of the charter.

We are interested in experienced engineers with expertise in, and passion for, solving difficult problems in distributed systems and highly available services to join our Oracle Big Data Service team. In this role, you will be instrumental in building, maintaining, and enhancing our managed, cloud-native Big Data service focused on large-scale data processing and analytics. At Oracle, you can help shape, design, and build innovative new systems from the ground up. These are exciting times in our space: we are growing fast, still at an early stage, and working on ambitious new initiatives. Engineers at any level can have significant technical and business impact.
Minimum Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field
- A minimum of 1-2 years of experience in software development, with a focus on large-scale distributed systems, cloud services, or Big Data technologies
- US passport holders only; the position requires this for access to US Government regions
- Expertise in coding in Java and Python, with an emphasis on tuning/optimization
- Experience with Linux systems administration, troubleshooting, and security best practices in cloud environments
- Experience with open-source software in the Big Data ecosystem
- Experience at an organization with an operational/DevOps culture
- Solid understanding of networking, storage, and security components related to cloud infrastructure
- Solid foundation in data structures, algorithms, and software design, with strong analytical and debugging skills

Preferred Qualifications:
- Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN), Spark, Kafka, Flink, and other big data technologies
- Proven expertise in cloud-native architectures and services, preferably within Oracle Cloud Infrastructure (OCI), AWS, Azure, or GCP
- In-depth understanding of Java and JVM mechanics
- Good problem-solving skills and the ability to work in a fast-paced, agile environment

Responsibilities

Key Responsibilities:
- Participate in the development and maintenance of a scalable and secure Hadoop-based data lake service
- Code, integrate, and operationalize open- and closed-source data ecosystem components for Oracle cloud service offerings
- Collaborate with cross-functional teams including DevOps, Security, and Product Management to define and execute product roadmaps, service updates, and feature enhancements
- Become an active member of the Apache open source community when working on open source components
- Ensure compliance with security protocols and industry best practices when handling large-scale data processing in the cloud
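The MapReduce model named in the qualifications boils down to a map step that emits key-value pairs from each input record and a reduce step that aggregates values per key. A toy word count in plain Python illustrating the idea (this is the conceptual model only, not Hadoop's API; Hadoop additionally shuffles pairs across machines between the two phases):

```python
from collections import Counter
from itertools import chain

def map_phase(line):
    # Map: turn one input record into (word, 1) pairs.
    return [(w.lower(), 1) for w in line.split()]

def reduce_phase(pairs):
    # Reduce: sum the counts for each key.
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data big lakes", "data lakes"]
counts = reduce_phase(chain.from_iterable(map_phase(l) for l in lines))
print(counts)  # {'big': 2, 'data': 2, 'lakes': 2}
```

Because each map call touches only one record and each reduce touches only one key's values, both phases parallelize across a cluster, which is what makes the model suitable for the data-lake scale this service targets.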
Qualifications Career Level - IC2 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
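The monitoring-and-telemetry charter described in the listing above can be sketched, at its simplest, as a threshold alerter over metric samples. This is a hedged illustration only: the class and method names (`TelemetrySketch`, `alertingHosts`) and the CPU-reading scenario are invented, not part of any Oracle service API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Minimal sketch of acting on telemetry: flag hosts whose average
// metric reading over a window exceeds a threshold.
public class TelemetrySketch {

    // Returns the names of hosts whose mean reading exceeds the threshold.
    public static List<String> alertingHosts(Map<String, List<Double>> readings, double threshold) {
        List<String> alerts = new ArrayList<>();
        for (Map.Entry<String, List<Double>> e : readings.entrySet()) {
            double mean = e.getValue().stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
            if (mean > threshold) {
                alerts.add(e.getKey());
            }
        }
        return alerts;
    }

    public static void main(String[] args) {
        Map<String, List<Double>> cpu = Map.of(
                "node-1", List.of(0.91, 0.95, 0.97),  // sustained high CPU
                "node-2", List.of(0.20, 0.25, 0.30)); // healthy
        System.out.println(alertingHosts(cpu, 0.90)); // only node-1 alerts
    }
}
```

A real service would feed this from a telemetry pipeline and trigger remediation rather than print, but the averaging-and-threshold shape is the same.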

Posted 1 day ago

Apply

12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

The Applications Development Technology Lead Analyst is a senior-level position responsible for establishing and implementing new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to lead applications systems analysis and programming activities. Responsibilities: Software Development & System Design Lead the end-to-end design and development process, from user research and prototyping to final implementation and delivery. Architect scalable, modular, and maintainable frontend solutions, including micro frontend frameworks and component-based design. Drive UX strategy through data-backed insights, usability testing, user journey mapping, and design thinking workshops. Build and maintain a design system that ensures consistency, accessibility (WCAG compliance), and alignment between design and development. Design, develop, and maintain robust, scalable, and high-performance applications. Ensure high levels of unit test coverage, test-driven development (TDD), and behavior-driven development (BDD). Actively contribute to hands-on coding, code reviews, and refactoring to maintain high engineering standards. Research and understand DevOps best practices based on industry and Citi standards. Design software components in a microservices, cloud-native architecture to be resilient, stateless, scalable, and testable, with automation and reusability as key objectives. Develop services and APIs in Java and Spring Boot, utilizing the latest frameworks and libraries with an emphasis on design patterns, code quality, secure coding practices, and writing testable code with tests. Implement automated build, test, and deployment pipelines utilizing the latest DevOps tools available at Citi. Partner with QA engineers to develop test cases and build out an automated testing suite for both APIs and microservices. 
Collaborate with other development teams to build shared libraries and components for reuse across the organization. Participate in daily Scrum ceremonies and conduct sprint demos for stakeholders. Partner with support teams to formally hand over software released to production and provide rotational support for the platform. Proactively create and manage relevant application documentation using Confluence, JIRA, and SharePoint. Qualifications: 12-15 years of strong application development experience. 10+ years of experience in application development using Java, Microservices, SQL, and messaging platforms such as Kafka, MQ, etc. Experienced in Spring Framework and Spring Boot technologies. Experienced in API development and application security best practices (OAuth, TLS, PKI, etc.). Passion and commitment for adopting industry best practices and new technologies, with an exploratory mindset. Proactive, detail-oriented, and self-motivated professional who can hit the ground running. Experience working in an Agile/Scrum environment. Strong communication and presentation skills. Ability to manage tight deadlines or unexpected priority changes; excellent time management. Willingness to ask questions and challenge the status quo. Education: Bachelor's degree/University degree or equivalent experience; Master's degree preferred. This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required. ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Applications Development ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. 
------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
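The TDD expectation in the listing above can be illustrated with a minimal, hypothetical sketch. Nothing here is Citi code: the `AmountValidator` class and its validation rules are invented for illustration. In TDD, the assertions are written first (failing), then the method is implemented to satisfy them.

```java
// TDD-style sketch (hypothetical): the assertions in main() represent tests
// written first; validate() was then implemented to make them pass.
public class AmountValidator {

    // A transfer amount is valid if it is positive and within the per-transaction limit.
    public static boolean validate(double amount, double limit) {
        return amount > 0 && amount <= limit;
    }

    public static void main(String[] args) {
        // Red -> green: these checks existed before the implementation.
        assert validate(100.0, 500.0);   // typical transfer
        assert !validate(-5.0, 500.0);   // negative amounts rejected
        assert !validate(600.0, 500.0);  // over-limit rejected
        assert validate(500.0, 500.0);   // boundary is inclusive
        System.out.println("all checks passed");
    }
}
```

(Run with `java -ea` so `assert` statements are enabled; a real codebase would use JUnit, but the test-first workflow is the point.)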

Posted 1 day ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description: Senior Backend Engineer Position Overview: Maersk Group is strengthening its application development organization with a focus on driving ownership, predictability, agility, and lowering time to delivery. As part of this exciting growth, we are seeking an experienced and proactive Senior Backend Engineer to join our team in maintaining and evolving a production-ready platform. As the system has already been built and deployed, this role will focus on supporting live operations, enhancing performance, and driving new feature development based on user needs and business priorities. The role will involve contributing to a cross-functional agile team. You will work on improving scalability, reliability, and observability, while contributing to a secure, event-driven microservices architecture. This is a high-ownership role requiring deep backend expertise, problem-solving ability, and a mindset for continuous improvement. Key Responsibilities: Feature Enhancements & Iteration Collaborate with product and frontend teams to design and deliver new features and APIs that align with business needs. Refactor and optimize existing services to enhance performance and maintainability. Apply Test-Driven Development (TDD) and/or Behaviour-Driven Development (BDD) practices to ensure robust, verifiable functionality. Ensure backward compatibility and data integrity during feature rollouts. Infrastructure & CI/CD Manage and deploy services in Kubernetes across AWS/GCP/Azure/Private Cloud environments. Maintain and improve CI/CD pipelines using GitHub Actions, enabling faster and safer releases. Platform Support & Reliability Monitor and maintain production backend services to ensure high availability, performance, and resilience. Quickly investigate and resolve issues across distributed systems, with strong debugging and root cause analysis skills. 
Use observability tools like Prometheus, Grafana, and distributed tracing to proactively detect anomalies. Participate in incident response processes, including on-call support if needed, and contribute to postmortems and preventive actions. Collaboration Work closely with other engineers and stakeholders to plan, prioritize, and deliver features and improvements. Participate in code reviews, architecture discussions, and mentoring of junior team members. Champion best practices in event-driven design, DDD, clean code, and secure access via RBAC. Required Skills: 8-12 years of professional backend development experience. Proficiency in Kotlin or similar JVM-based languages. Proficiency in the Spring Framework (Core, Boot, Reactive Stack, and Servlet Stack). Hands-on experience with Apache Kafka; Kafka Streams is a plus. Deep knowledge of Event-Driven Microservices and DDD patterns. Experience implementing RBAC and working with Keycloak, OAuth2/OIDC, and LDAP-based authentication systems. Proven experience with TDD/BDD methodologies for backend development. Strong experience with Kubernetes and container orchestration. Experience with cloud platforms (AWS, GCP, Azure) and Private Cloud infrastructure. Solid experience with SQL-based databases (PostgreSQL) and MongoDB. Familiarity with GIS systems like GraphHopper/OpenStreetMap is a strong plus. Experience using Prometheus, Grafana, and GitHub-based CI/CD workflows. Understanding of DevSecOps, Lean Development, and an automation-first mindset. Experience with build tools such as Maven and Gradle for managing dependencies and project builds. Strong understanding of version control using Git - branching strategies, pull requests, and code reviews. Strong problem-solving skills and ability to work in a collaborative, fast-paced environment. Preferred Skills: Knowledge of distributed system design and data streaming best practices. Exposure to frontend technologies or APIs consumed by modern UIs. 
Experience in Agile or Scrum-based teams. Familiarity with MapLibre or any frontend mapping libraries (a plus for collaboration). Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.

Posted 1 day ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Full Stack Architect Experience: 10+ Years Location: Pune Notice Period: Immediate Joiners/ Max. 15 days preferred Key Responsibilities: Design and own the architecture for scalable and distributed systems using modern technologies (Node.js, NestJS, Laravel, ReactJS, AWS, etc.) Translate business requirements into high-level technical solutions and architecture documentation. Define system standards, development processes, and deployment strategies across applications. Provide technical leadership across the full software development lifecycle, including architecture reviews, code quality, and DevOps practices. Collaborate with cross-functional teams (product managers, developers, QA, DevOps) to align technology with business goals. Guide teams on best practices in microservices, APIs, cloud-native patterns, and database design. Lead and mentor development teams, conduct architecture workshops, and participate in sprint planning and delivery. Ensure solution designs meet performance, scalability, availability, and security requirements. Evaluate and recommend emerging tools, technologies, and frameworks that improve development productivity and product quality. Key Skills & Technologies: Architecture & Design: Microservices, Distributed Systems, API Design, Event-Driven Architecture Backend: Node.js, NestJS, PHP, Laravel Frontend: ReactJS, Angular, Vue.js, GraphQL Cloud & DevOps: AWS (Lambda, EC2, ECS, S3, RDS, Elasticache), Azure, Docker, GitLab CI/CD, Jenkins Databases: MongoDB, MySQL, PostgreSQL, Elasticsearch Others: Kafka, REST/GraphQL APIs, Agile (Scrum/Kanban), Jira, Confluence.
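The event-driven architecture this role calls for can be sketched, at its smallest, as an in-memory publish/subscribe bus. This is an illustrative assumption, not production guidance: class and topic names are invented, and a real system would put a broker such as Kafka between publishers and subscribers.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal in-memory event bus showing the publish/subscribe shape of
// event-driven design; a broker (e.g., Kafka) replaces this map in production.
public class EventBusSketch {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    // Register a handler for a topic.
    public void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    // Deliver an event to every handler subscribed to its topic.
    public void publish(String topic, String event) {
        subscribers.getOrDefault(topic, List.of()).forEach(h -> h.accept(event));
    }

    public static void main(String[] args) {
        EventBusSketch bus = new EventBusSketch();
        List<String> received = new ArrayList<>();
        bus.subscribe("orders.created", received::add);
        bus.publish("orders.created", "order-42");
        bus.publish("orders.cancelled", "order-7"); // no subscriber, dropped
        System.out.println(received); // [order-42]
    }
}
```

The decoupling is the point: the publisher knows only the topic, never the subscribers, which is what lets microservices evolve independently.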

Posted 1 day ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Summary: Senior Engineer 2 (SDET) Location: New Delhi Division: Ticketmaster Sport International Engineering Line Manager: Andrew French Contract Terms: Permanent THE TEAM Ticketmaster Sport is the global leader in sports ticketing. From the smallest clubs to the biggest leagues and tournaments, we are trusted as their ticketing partner. You will be joining the Ticketmaster Sports International Engineering division which is dedicated to the creation and maintenance of industry standard ticketing software solutions. Our software is relied upon by our clients to manage and sell their substantial ticketing inventories. Our clients include some of the highest profile clubs and organisations in sport. Reliability, quality, and performance are expected by our clients. We provide an extensive catalogue of hosted services including back-office tooling, public-facing web sales channels, and other services and APIs. The team you will join is closely involved in all these areas. The Ticketmaster Sports International Engineering division comprises distributed software development teams working together in a highly collaborative environment. You will be joining our expanding engineering team based in New Delhi. THE JOB You will be joining a Microsoft .Net development team as a Senior Quality Assurance Engineer. The team you will be joining is responsible for data engineering in the Sport platform. This includes developing back-end systems which integrate with other internal Ticketmaster systems, as well as with our external business partners. You will be required to work with event-driven systems, message queueing, API development, and much more besides. There is a tremendous opportunity for you to make a difference. We are looking for QA engineers who can help us drive our platform forward from a quality assurance point of view, as well as act as a mentor for more junior members of the team. 
You will be working very closely with the team lead to ensure the quality of our software and to assist in the planning and decision-making process. Apart from standard manual testing activities, you will help improve our automated test suites, as well as be involved with performance testing. In essence, your job will be to ensure our software solutions are of the highest quality, robustness, and performance. WHAT YOU WILL BE DOING Design, build, and maintain scalable and reusable test automation frameworks using C# .Net and Selenium. Collaborate with developers, product managers, and QA to understand requirements and build comprehensive test plans. Define, develop, and implement quality assurance practices, procedures, and test plans. Create, execute, and maintain automated functional, regression, integration, and performance tests. Ensure high code quality and testing standards across the team through code reviews and best practices. Investigate test failures, diagnose bugs, and file detailed bug reports. Produce test and quality reports. Integrate test automation with CI/CD pipelines (GitLab, Azure DevOps, Jenkins). Operate effectively within an organisation with teams spread across the globe. Work effectively within a dynamic team environment to define and advocate for QA standards and best practices that ensure the highest level of quality. TECHNICAL SKILLS Must have: 5+ years of experience in test automation development, preferably in an SDET role. Strong hands-on experience with C# .Net and Selenium WebDriver. Experience with tools like NUnit, SpecFlow, or similar test libraries. Solid understanding of object-oriented programming (OOP) and software design principles. Experience developing and maintaining custom automation frameworks from scratch. Proficiency in writing clear, concise, and comprehensive test cases and test plans. Experience working in Scrum teams within an Agile methodology. 
Experience in developing regression and functional test plans and managing defects. Ability to understand business requirements and identify scenarios for automated and manual testing. Experience in performance testing using Gatling. Experience working with Git CI/CD pipelines. Experience with web service testing (e.g., RESTful services), including test automation with Rest Assured/Postman. Proficiency working with relational databases such as MSSQL. A deep understanding of web protocols and standards (e.g., HTTP, REST). A strong, detail-oriented problem-solving mindset. Nice to have: Exposure to performance testing tools. Testing enterprise applications deployed to cloud environments such as AWS. Experience with static code analysis tools like SonarQube. Building test infrastructures using containerisation technologies such as Docker, and working with continuous delivery or continuous release pipelines. Experience in microservice development. Experience with Octopus Deploy. Experience with TestRail. Experience with event-driven architectures, messaging patterns, and practices. Experience with Kafka, AWS SQS, or other similar technologies. YOU (BEHAVIOURAL SKILLS) Excellent communication and interpersonal skills. We work with people all over the globe using English as a shared language. As a senior engineer you will be expected to help managers make decisions by describing problems and proposing solutions, and to respond positively to challenge. Excellent problem-solving skills. Desire to take on responsibility and to grow as a quality assurance software engineer. Enthusiasm for technology and a desire to communicate that to your fellow team members. The ability to pick up any ad-hoc technology and run with it. Continuous curiosity for new technologies on the horizon. LIFE AT TICKETMASTER We are proud to be a part of Live Nation Entertainment, the world's largest live entertainment company. 
Our vision at Ticketmaster is to connect people around the world to the live events they love. As the world’s largest ticket marketplace and the leading global provider of enterprise tools and services for the live entertainment business, we are uniquely positioned to successfully deliver on that vision. We do it all with an intense passion for Live and an inspiring and diverse culture driven by accessible leaders, attentive managers, and enthusiastic teams. If you’re passionate about live entertainment like we are, and you want to work at a company dedicated to helping millions of fans experience it, we want to hear from you. Our work is guided by our values: Reliability - We understand that fans and clients rely on us to power their live event experiences, and we rely on each other to make it happen. Teamwork - We believe individual achievement pales in comparison to the level of success that can be achieved by a team Integrity - We are committed to the highest moral and ethical standards on behalf of the countless partners and stakeholders we represent Belonging - We are committed to building a culture in which all people can be their authentic selves, have an equal voice and opportunities to thrive EQUAL OPPORTUNITIES We are passionate and committed to our people and go beyond the rhetoric of diversity and inclusion. You will be working in an inclusive environment and be encouraged to bring your whole self to work. We will do all that we can to help you successfully balance your work and homelife. As a growing business we will encourage you to develop your professional and personal aspirations, enjoy new experiences, and learn from the talented people you will be working with. It's talent that matters to us and we encourage applications from people irrespective of their gender, race, sexual orientation, religion, age, disability status or caring responsibilities.
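Performance testing (the listing above names Gatling) ultimately reduces to percentile math over latency samples. As a hedged sketch, here is the nearest-rank method, one of several common percentile definitions; the class name and the sample numbers are invented for illustration, and this shows the arithmetic only, not Gatling's API.

```java
import java.util.Arrays;

// Percentile math underlying performance-test reports: with the nearest-rank
// method, pN is the smallest sample such that N% of samples are <= it.
public class PercentileSketch {

    public static long percentile(long[] latenciesMs, double p) {
        long[] sorted = latenciesMs.clone();
        Arrays.sort(sorted);
        // nearest-rank: take the ceil(p * n)-th smallest sample (1-based rank)
        int rank = (int) Math.ceil(p * sorted.length);
        return sorted[Math.max(0, rank - 1)];
    }

    public static void main(String[] args) {
        long[] samples = {12, 15, 11, 210, 14, 13, 16, 18, 17, 19};
        // With 10 samples, p95 lands on the single outlier:
        System.out.println(percentile(samples, 0.95)); // 210
    }
}
```

This is why performance reports emphasize p95/p99 over the mean: one slow outlier barely moves the average but dominates the tail percentiles users actually experience.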

Posted 1 day ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Project Role : Quality Engineering Lead (Test Lead) Project Role Description : Leads a team of quality engineers through multi-disciplinary team planning and ecosystem integration to accelerate delivery and drive quality across the application lifecycle. Applies business and functional knowledge to develop end-to-end testing strategies through the use of quality processes and methodologies. Applies testing methodologies, principles, and processes to define and implement key metrics to manage and assess the testing process, including test execution and defect resolution. Must have skills : Automated Testing Good to have skills : Selenium, Core Banking, Java Minimum 5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Quality Engineering Lead (Test Lead), you will lead a team of quality engineers through multi-disciplinary team planning and ecosystem integration to accelerate delivery and drive quality across the application lifecycle. You will apply business and functional knowledge to develop end-to-end testing strategies using quality processes and methodologies. Roles & Responsibilities: - Expected to be an SME - Collaborate with and manage the team to perform - Responsible for team decisions - Engage with multiple teams and contribute to key decisions - Provide solutions to problems for their immediate team and across multiple teams - Lead team planning and ecosystem integration - Develop end-to-end testing strategies - Define and implement key metrics to manage and assess the testing process Professional & Technical Skills: - Must have 5-8 years of experience in API test automation, with a strong focus on developing automated test scripts and frameworks. - Must have hands-on experience with API testing tools like Postman, Rest Assured, or similar tools. 
- Must Have Skills: Proficiency in Automated Testing, Selenium, API Testing - Strong understanding of test automation frameworks - Must have proficiency in scripting languages like Java, JavaScript, or Python to automate test scripts. - Expertise in mocking and stubbing APIs using tools like WireMock, MockServer, or other service virtualization tools. - Hands-on experience with Newman automation and Karate API automation - Experience in enhancing/creating BDD automation frameworks for GUI/API. - Experience with BDD tooling such as Cucumber, Maven, TestNG, etc. - Good To Have Skills: Experience with Selenium and Core Banking. - Knowledge of microservices architecture and API interactions, and experience with Docker. - Experience with Kafka consumer/producer testing - Ability to create and validate API data for testing purposes. Additional Information: - The candidate should have a minimum of 7 years of experience in Automated Testing. - This position is based at our Gurugram office; it is mandatory to work from Gurugram 3 days/week. - A 15 years full time education is required.
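Mocking and stubbing an API, as the listing above describes with WireMock/MockServer, can be demonstrated dependency-free using the JDK's built-in `HttpServer` as the stub and `HttpClient` as the system under test. The endpoint path, payload, and class name below are invented for illustration; this is a sketch of the technique, not either tool's API.

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Stubbing an API without WireMock: the JDK's built-in HttpServer plays the
// mocked service, and HttpClient plays the client under test.
public class ApiStubSketch {

    // Start a stub that always returns the given JSON, call it once, and
    // return the response body the client observed.
    public static String callStub(String cannedJson) throws Exception {
        HttpServer stub = HttpServer.create(new InetSocketAddress(0), 0); // port 0 = any free port
        stub.createContext("/api/account", exchange -> {
            byte[] body = cannedJson.getBytes();
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        stub.start();
        try {
            URI uri = URI.create("http://localhost:" + stub.getAddress().getPort() + "/api/account");
            HttpResponse<String> resp = HttpClient.newHttpClient()
                    .send(HttpRequest.newBuilder(uri).build(), HttpResponse.BodyHandlers.ofString());
            return resp.body();
        } finally {
            stub.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(callStub("{\"balance\": 1200}")); // the canned payload, as the client saw it
    }
}
```

Real service-virtualization tools add request matching, fault injection, and verification on top, but the core idea is the same: substitute a controllable HTTP endpoint for the real dependency.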

Posted 1 day ago

Apply

1.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description About Oracle Analytics & Big Data Service: Oracle Analytics is a complete platform that supports every role within analytics, offering cloud-native services or on-premises solutions without compromising security or governance. Our platform delivers a unified system for managing everything from data collection to decision-making, with seamless integration of AI and machine learning to help businesses accelerate productivity and uncover critical insights. Oracle Big Data Service, a part of Oracle Analytics, is a fully managed, automated cloud service designed to help enterprises create scalable Hadoop-based data lakes. The service's scope encompasses not just tight integration with OCI's native infrastructure (security, cloud, storage, etc.) but also deep integration with other relevant cloud-native services in OCI. It includes cloud-native approaches to service-level patching and upgrades, and maintaining high availability of the service in the face of random failures and planned downtimes in the underlying infrastructure (e.g., patching the Linux kernel to address a security vulnerability). Developing systems for monitoring and telemetry into the service's runtime characteristics, and being able to act on the telemetry data, is part of the charter. We are looking for experienced engineers with expertise in, and passion for, solving difficult problems in distributed systems and highly available services to join our Oracle Big Data Service team. In this role, you will be instrumental in building, maintaining, and enhancing our managed, cloud-native Big Data service focused on large-scale data processing and analytics. At Oracle, you can help shape, design, and build innovative new systems from the ground up. These are exciting times in our space - we are growing fast, still at an early stage, and working on ambitious new initiatives. Engineers at any level can have significant technical and business impact. 
Minimum Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field. Minimum of 1-2 years of experience in software development, with a focus on large-scale distributed systems, cloud services, or Big Data technologies. Must be a US passport holder; this is required by the position to access US Gov regions. Expertise in coding in Java and Python, with an emphasis on tuning/optimization. Experience with Linux systems administration, troubleshooting, and security best practices in cloud environments. Experience with open-source software in the Big Data ecosystem. Experience at an organization with an operational/DevOps culture. Solid understanding of networking, storage, and security components related to cloud infrastructure. Solid foundation in data structures, algorithms, and software design, with strong analytical and debugging skills. Preferred Qualifications: Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN), Spark, Kafka, Flink, and other big data technologies. Proven expertise in cloud-native architectures and services, preferably within Oracle Cloud Infrastructure (OCI), AWS, Azure, or GCP. In-depth understanding of Java and JVM mechanics. Good problem-solving skills and the ability to work in a fast-paced, agile environment. Responsibilities Key Responsibilities: Participate in the development and maintenance of a scalable and secure Hadoop-based data lake service. Code, integrate, and operationalize open- and closed-source data ecosystem components for Oracle cloud service offerings. Collaborate with cross-functional teams including DevOps, Security, and Product Management to define and execute product roadmaps, service updates, and feature enhancements. Become an active member of the Apache open source community when working on open source components. Ensure compliance with security protocols and industry best practices when handling large-scale data processing in the cloud. 
Qualifications Career Level - IC2 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 1 day ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Software Engineer – Senior (34258) Location: Chennai (Hybrid) Employment Type: Full-Time Compensation: Up to ₹21 LPA Experience: 5–7 Years Notice Period: Immediate joiners or candidates with up to 30 days' notice preferred Mandatory Skills Java (with at least 2+ years hands-on experience in GCP) J2EE, Spring Boot, Spring Cloud GCP (Google Cloud Platform) PostgreSQL CI/CD tools (e.g., Jenkins, Tekton) Docker and Kubernetes Key Responsibilities Develop and maintain REST-based microservices using Spring Boot, Spring MVC, and related frameworks. Design scalable, cloud-native solutions deployed on Google Cloud Platform (GCP). Create web front ends using JavaScript frameworks such as Angular or React. Work in cloud container environments (Docker, Kubernetes, OpenShift) for application deployment and orchestration. Perform SQL and NoSQL data manipulation in databases such as PostgreSQL, SQL Server, Teradata, and BigQuery. Implement and manage build pipelines using CI/CD tools (Jenkins, Tekton, Gradle). Utilize Git for source control and participate in collaborative code reviews. Follow engineering best practices including Test-Driven Development (TDD), paired programming, clean code, and Agile development methodologies. Work with messaging and streaming tools like Kafka and MQTT. Utilize tools such as Terraform for infrastructure provisioning in cloud environments. Preferred Qualifications Experience practicing Extreme Programming (XP) disciplines such as test-first development and mob/pair programming. Knowledge and implementation experience in Spring Boot microservices architecture. Familiarity with Clean Code principles and Lean software development. 
Educational Requirements Required: Bachelor's Degree in Computer Science, Engineering, or related field Preferred: Master's Degree Skills: BigQuery, PostgreSQL, Teradata, SQL, CI/CD tools, GCP, Git, Spring Boot, React, Angular, JavaScript frameworks, Terraform, MQTT, Cloud, J2EE, Kafka, Docker, Kubernetes, SQL Server, Java, Spring, Spring Cloud, NoSQL, Software

Posted 1 day ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Overview
Prodapt Solutions is looking for a resource with hands-on experience in OpenShift with Java.
Locations: Chennai & Hyderabad
Shift Timing: Rotational
Work mode: In office

Responsibilities
Administer and manage Red Hat OpenShift clusters, including upgrades, scaling, security, and backup strategies.
Deploy, maintain, and troubleshoot Jenkins instances, plugins, agents, and build pipelines.
Troubleshoot CI/CD pipeline issues related to Jenkins, OpenShift builds/deployments, and integrations.
Implement and maintain monitoring and alerting systems using tools like Prometheus, Grafana, the ELK stack, or similar.
Automate infrastructure tasks and deployments using Infrastructure-as-Code tools (e.g., Ansible, Helm, Terraform).
Collaborate with developers, QA, and other DevOps team members to support build and deployment processes.
Maintain system documentation, operational runbooks, and SOPs.

Requirements
Java or another coding skill is a must:
Java: knowledge of higher versions of Java, i.e., Java 8 and above
Knowledge of producers and consumers
Hands-on experience with Spring Boot and microservices
Able to write complex queries in Oracle or Postgres
Additional knowledge of any cloud platform would be helpful
Additional knowledge of Kafka and Docker would also be helpful
Hands-on experience resolving security issues
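The "producers and consumers" requirement above refers to the messaging pattern behind Kafka. A minimal, library-free sketch of the pattern, with an in-memory queue standing in for a Kafka topic (message contents are invented; real code would use a Kafka client against a broker):

```python
import queue
import threading

# A thread-safe queue.Queue stands in for a Kafka topic; the producer
# publishes messages, the consumer processes them independently.

def produce(topic: queue.Queue, messages):
    for msg in messages:
        topic.put(msg)          # blocks if the queue is full (back-pressure)
    topic.put(None)             # sentinel: signal end of stream

def consume(topic: queue.Queue, out: list):
    while True:
        msg = topic.get()
        if msg is None:         # sentinel seen: stop consuming
            break
        out.append(msg.upper()) # stand-in for real message processing

topic = queue.Queue(maxsize=10)
received = []
consumer = threading.Thread(target=consume, args=(topic, received))
consumer.start()
produce(topic, ["payment", "refund", "chargeback"])
consumer.join()
print(received)  # ['PAYMENT', 'REFUND', 'CHARGEBACK']
```

With Kafka the queue becomes a partitioned topic and the sentinel is replaced by consumer-group offset management, but the producer/consumer decoupling is the same idea.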

Posted 1 day ago

Apply

5.0 years

0 Lacs

Greater Chennai Area

On-site

Job Summary
Synechron is seeking a highly skilled Node.js Developer to join our dynamic team. The successful candidate will be responsible for developing, implementing, and maintaining scalable, high-performance applications using Node.js technologies. This role is integral to delivering innovative financial technology solutions that align with business objectives. The position offers an opportunity to work in a collaborative environment, contributing to cutting-edge projects that enhance client services and operational efficiency.

Software Requirements
Proficiency in Node.js
Extensive experience with JavaScript
Strong understanding of databases and data management systems (preferred experience with SQL and NoSQL databases)
Hands-on experience with TypeScript
Familiarity with GraphQL for API development
Experience with containerization tools such as Docker and orchestration with Kubernetes
Knowledge of CI/CD pipelines and tools (e.g., Jenkins, GitLab CI)
Experience with messaging queues such as Kafka, AWS SQS, or Azure Service Bus
Working knowledge of API gateways like 3Scale (preferred)
Security expertise in token-based authentication (e.g., JWT, OAuth) for REST APIs

Overall Responsibilities
Develop and maintain scalable, high-performance applications using Node.js and related technologies
Design and implement RESTful APIs and integrate GraphQL interfaces
Collaborate with cross-functional teams to deliver high-quality solutions on time and within scope
Participate in CI/CD workflows, troubleshoot issues within distributed service ecosystems, and optimize application performance
Manage containerization and deployment in cloud environments using Docker and Kubernetes
Implement security protocols, including Single Sign-On and token-based authentication systems
Oversee message queue management and integrate with cloud messaging services
Stay current with technological advancements in Node.js and related frameworks to recommend improvements
Ensure adherence to best coding practices, security standards, and documentation protocols

Technical Skills (By Category)
Programming Languages: Node.js (required), JavaScript (required), TypeScript (required)
Databases/Data Management: SQL, NoSQL (preferred)
Cloud Technologies: AWS, Microsoft Azure (preferred)
Frameworks and Libraries: GraphQL, Express.js (or similar)
Development Tools and Methodologies: Git, Jenkins, Docker, Kubernetes, Agile methodologies
Security Protocols: REST API security, JWT, OAuth, Single Sign-On (SSO)

Experience Requirements
Minimum of 5 years' experience in software development with Node.js and JavaScript
At least 3 years of practical experience working with TypeScript
Proven expertise in performance tuning, debugging, and monitoring applications
Domain-specific experience in financial services, banking, or fintech sectors is preferred, especially within enterprise environments
Diverse industry experience or relevant alternative pathways; candidates with project experience in relevant sectors are encouraged to apply

Day-to-Day Activities
Develop, test, and deploy high-availability applications and APIs
Collaborate in agile teams through daily stand-ups, planning, and review sessions
Troubleshoot technical issues and optimize system performance
Participate in code reviews and maintain comprehensive documentation
Integrate new technologies and tools to improve development workflows
Engage with stakeholders to clarify requirements and communicate progress
Monitor system health and apply fixes to ensure optimal operation

Qualifications
Bachelor's or postgraduate degree in Computer Science, Software Engineering, or a related field, or equivalent industry experience
Relevant certifications in cloud platforms, security, or development frameworks are advantageous
Commitment to continuous learning and professional development in emerging technologies

Professional Competencies
Analytical and problem-solving skills with a focus on technical excellence
Ability to influence and collaborate across teams to drive project success
Strong communication skills for technical and non-technical stakeholders
Adaptability to evolving project requirements and rapid technological changes
Innovation-driven mindset with a focus on process improvement
Effective time and priority management skills

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture, promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.

Candidate Application Notice
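The token-based authentication (JWT, OAuth) expertise listed above centers on how a signed token is built and checked. A stdlib-only HS256 signing/verification sketch; the secret and claims are made-up examples, and production code would use a maintained JWT library:

```python
import base64
import hashlib
import hmac
import json

# JWT = base64url(header) . base64url(payload) . base64url(HMAC-SHA256 signature)

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")   # JWT drops padding

def sign(claims: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    mac = hmac.new(secret, header + b"." + payload, hashlib.sha256).digest()
    return b".".join([header, payload, b64url(mac)]).decode()

def verify(token: str, secret: bytes) -> bool:
    header, payload, sig = token.encode().split(b".")
    mac = hmac.new(secret, header + b"." + payload, hashlib.sha256).digest()
    return hmac.compare_digest(b64url(mac), sig)         # constant-time compare

token = sign({"sub": "user-42", "scope": "payments:read"}, b"demo-secret")
print(verify(token, b"demo-secret"))   # True
print(verify(token, b"wrong-secret"))  # False
```

Real deployments also validate the `exp`/`aud` claims and pin the accepted algorithm rather than trusting the token header.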

Posted 1 day ago

Apply

5.0 years

0 Lacs

Greater Chennai Area

On-site

"Accelerating business to improve the lives of people". This is our purpose statement and encapsulates what we enthusiastically do every day. We integrate our customers' IT systems to make sure that the right data is at the right place at the right time when they digitalize their processes. Companies need their systems to talk to each other to ensure that cars roll off the factory line, that everyone receives their payments on time, and that you can buy what you need from a supermarket. Our success story began in 1986, when we helped the German automotive industry to digitalize their paper-based supply chains. Today, SEEBURGER is a leading global B2B software provider with more than 1,000 #businessaccelerators in 15 countries worldwide and over 10,000 satisfied customers that rely on our innovative solutions.

About SEEBURGER
SEEBURGER AG is a German software company founded in 1986. Our primary product is the SEEBURGER Business Integration Suite (BIS). We exploit the full potential of digitalization by connecting IT systems with each other, and that is exactly what we do for our customers. To put it simply: we are our customers' "business accelerators" and keep the economy moving. Currently we operate with more than 1,000 employees across 11 countries.

Requirements
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• At least 5+ years of professional experience with backend Java development.
• Solid basis in core Java libraries and design patterns.
• Expertise in implementing communication protocols; an implementer of those protocols, not merely a user.
• Proven experience as a cloud developer with a focus on Kubernetes and multi-cloud environments.
o Should know what an ingress is and how to use it.
• Strong programming skills in Java.
• Hands-on experience with container orchestration using Kubernetes.
• In-depth knowledge of AWS, Azure, and GCP services and offerings.
• Excellent problem-solving and troubleshooting skills.
• Strong communication and collaboration skills.
• Ability to work independently.

1. Cloud-Native Development:
o Design, develop, and deploy cloud-native applications using Kubernetes as the primary orchestration platform.
o Implement microservices architecture, containerization, and serverless computing for optimal scalability and efficiency.
2. Kubernetes Expertise:
o Demonstrate proficiency in managing Kubernetes clusters, including deployment, configuration, scaling, and monitoring.
3. Multi-Cloud Integration:
o Utilize your expertise in AWS, Azure, and GCP to architect and implement solutions that seamlessly span multiple cloud providers.
o Design and implement cross-cloud deployment strategies for enhanced redundancy and disaster recovery.
4. Automation and Infrastructure as Code:
o Experience with Infrastructure as Code (IaC) using tools such as Terraform or CloudFormation is not required, but would help.
o Experience automating deployment, scaling, and management processes to enhance operational efficiency is likewise not required, but would help.
5. Monitoring and Optimization:
o Implement communication and security modules, plus monitoring and logging solutions, to ensure the health and performance of cloud-based applications.
o Identify and execute optimization strategies to enhance resource utilization and cost-effectiveness.
6. Collaboration and Communication:
o Collaborate with cross-functional teams, including DevOps, QA, and product management, to ensure the successful delivery of cloud solutions.
o Clearly communicate technical concepts and solutions to both technical and non-technical stakeholders.

TECH STACK
• Network programming: TCP sockets, TLS, proxies, firewalls.
• Multithreaded programming.
• Communication protocols: HTTP/S, REST, SOAP, Web Services, SAP IDOC, SAP tRFC, SFTP, FTP, EDIINT AS2, ebXML, RosettaNet, OFTP, OPC UA, HDFS, Kafka, JMS, Mail.
• Security concepts and cryptography: asymmetric and symmetric keys, X.509 certificates, PGP, SSH, Hardware Security Modules (HSM), BouncyCastle.
• Security protocols: S/MIME, CMS, OpenPGP, XMLDsig, EDIFACT Secure, PDF signatures, ICAP.
• HTTP technologies: Tomcat, WebSockets, CXF, Apache HTTP API, HTTP 1.0 and 2.0, XML, JSON.
• Public clouds: Azure, AWS, Google.
o Knowledge of Docker containers, Kubernetes, and microservices is a plus.
• Frameworks: OSGi frameworks; our platform uses Apache Karaf.
• Databases: MSSQL, Oracle, PostgreSQL.
• Operating systems: MS Windows, Linux.
• Our toolset includes Eclipse, Git, Gerrit, Maven, Jenkins, Sonar, JUnit, and OpenProject.
• Ability to design software in a scalable and distributed environment.
• Experience with Scrum.

Roles & Responsibilities
• Design and develop backend solutions for our state-of-the-art integration platform.
• Work in a flexible global team.
• Develop security concepts and protocols with symmetric and asymmetric key cryptography.
• Implement communication protocols such as HTTP, REST, and Web Services.
• Continuously improve and enhance our products.
• Adapt to changing requirements.
• Create high quality by using agile development practices.
• Ability to meet deadlines.

Location: Chennai, India
Department: Development
Experience: Professional
Working Model: Work from office (5 days a week)

Benefit from being part of a globally renowned company that is driving digitalisation forward. We continue to grow - and so can you! It is important to us that you can fully utilise your talents and strengths and go your own way, regardless of whether you are aiming for a specialist or management career. With our expertise and growth in a future-oriented industry, we offer a wide range of opportunities and secure jobs. At SEEBURGER, we value the supportive atmosphere and family environment. #StrongerTogether is one of our corporate values and characterises the way we live together. Sounds exciting? Become a #Businessaccelerator today!
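The "TCP sockets" item in the tech stack above boils down to patterns like the following minimal echo exchange over loopback. Python stdlib is used purely for illustration; SEEBURGER's platform is Java-based, and real services add TLS, framing, and timeouts:

```python
import socket
import threading

# One-shot echo server: accept a connection, read once, echo back.
def serve(server: socket.socket):
    conn, _ = server.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)            # echo the payload back unchanged

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=serve, args=(server,))
t.start()

message = b"EDIFACT payload"          # invented stand-in for a B2B message
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(message)
    reply = b""
    while len(reply) < len(message):  # recv may return partial data
        chunk = client.recv(1024)
        if not chunk:
            break
        reply += chunk
t.join()
server.close()
print(reply)  # b'EDIFACT payload'
```

Wrapping the connected sockets with `ssl.SSLContext` is the usual next step toward the TLS requirement in the same list.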

Posted 1 day ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are seeking a highly skilled and experienced Java Backend Engineer with over 6 years of hands-on experience in developing scalable and high-performance backend systems. The ideal candidate will have in-depth expertise in Java (Spring Boot), GraphQL API design and implementation, and NoSQL databases, especially MongoDB. You will play a key role in designing backend architecture, building APIs, and contributing to core development in a collaborative and agile environment.

Key Responsibilities:
Design, develop, and maintain robust, scalable, and secure backend services using Java (Spring Boot) and GraphQL.
Implement and maintain GraphQL APIs, ensuring efficient data fetching and schema design.
Work extensively with MongoDB for schema design, data modeling, and query optimization.
Collaborate with frontend engineers, product managers, and DevOps teams to deliver end-to-end solutions.
Develop and maintain RESTful and GraphQL APIs to support web and mobile applications.
Optimize application performance and ensure high availability, scalability, and fault tolerance.
Participate in code reviews, unit testing, and integration testing to ensure code quality and reliability.
Mentor junior engineers and contribute to best practices and coding standards.
Troubleshoot, debug, and resolve production issues in a timely manner.
Stay updated with emerging technologies and incorporate them where relevant.

Required Skills and Qualifications:
6+ years of professional experience as a Backend Engineer with strong proficiency in Java (Spring Boot).
Proven expertise in GraphQL, including schema design, resolvers, and performance optimization.
Strong experience with MongoDB, including indexing, aggregation pipelines, and data modeling.
Proficient in developing RESTful and GraphQL APIs for large-scale applications.
Familiarity with microservices architecture and containerization (Docker/Kubernetes).
Good understanding of CI/CD pipelines, version control (Git), and agile methodologies.
Experience with unit testing frameworks like JUnit, Mockito, etc.
Strong problem-solving skills and the ability to work independently and in a team.
Excellent communication skills, both verbal and written.

Preferred Qualifications:
Experience with cloud platforms such as AWS, Azure, or GCP.
Familiarity with API Gateway, Kafka, or other messaging systems.
Knowledge of security best practices (OAuth2, JWT, encryption).
Experience with performance monitoring tools (e.g., Prometheus, Grafana, New Relic).
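The GraphQL "schema design and resolvers" expertise above rests on one core idea: each field maps to a resolver function, and nested fields resolve from their parent's result. A toy, dependency-free sketch (the types and data are invented; a Java service would typically use graphql-java or Spring for GraphQL):

```python
# In-memory stand-ins for MongoDB collections; all names are hypothetical.
ORDERS = {"o-1": {"id": "o-1", "status": "SHIPPED", "customer_id": "c-9"}}
CUSTOMERS = {"c-9": {"id": "c-9", "name": "Acme Corp"}}

# Resolver map: a field name points at the function that produces its value.
RESOLVERS = {
    "order": lambda args: ORDERS[args["id"]],
    "customer": lambda parent: CUSTOMERS[parent["customer_id"]],
}

def execute(query_fields: dict) -> dict:
    """Resolve a tiny, hard-wired query shape: order(id) { fields..., customer }."""
    order = RESOLVERS["order"](query_fields["order"]["args"])
    requested = query_fields["order"]["fields"]
    result = {f: order[f] for f in requested if f in order}
    if "customer" in requested:                 # nested field: resolve from parent
        result["customer"] = RESOLVERS["customer"](order)
    return {"order": result}

out = execute({"order": {"args": {"id": "o-1"},
                         "fields": ["id", "status", "customer"]}})
print(out)
# {'order': {'id': 'o-1', 'status': 'SHIPPED',
#            'customer': {'id': 'c-9', 'name': 'Acme Corp'}}}
```

In a real engine the query shape comes from a parsed GraphQL document, and per-field resolution is what makes "efficient data fetching" (batching, avoiding N+1 lookups) a design concern.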

Posted 1 day ago

Apply

15.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Solution Architect (Network Traffic & Flow Data Systems)
Location: Pune, India (with travel to onsite)
Experience Required: 15+ years in solution architecture, with at least 5 years in telecom data systems, network traffic monitoring, or real-time data streaming platforms.

Overview:
We are seeking a senior Solution Architect to lead the design, integration, and delivery of a large-scale network traffic and data flow system. This role is accountable for ensuring architectural integrity, zero-error tolerance, and robust fallback mechanisms across the entire solution lifecycle. The architect will oversee subscriber data capture, DPI, DR generation, Kafka integration, DWH ingestion, and secure API-based retrieval, ensuring compliance with security regulations.

Key Responsibilities:
Own the end-to-end architecture spanning subscriber traffic capture, DPI, DR generation, Kafka streaming, and data lake ingestion.
Design and document system architecture, data flow diagrams, and integration blueprints across DPI and traffic classification systems, nProbe, Kafka, Spark, and Cloudera CDP.
Implement fallback and error-handling mechanisms to ensure zero data loss and high availability across all layers.
Lead cross-functional collaboration with network engineers, Kafka developers, data platform teams, and security stakeholders.
Ensure data governance, encryption, and compliance using tools like Apache Ranger, Atlas, SDX, and HashiCorp Vault.
Oversee API design and exposure for customer access, including advanced search, session correlation, and audit logging.
Drive SIT/UAT planning, performance benchmarking, and production rollout readiness.
Provide technical leadership across multiple vendors and internal teams, ensuring alignment with business requirements and regulatory standards.

Required Skills & Qualifications:
Proven experience in telecom-grade architecture involving DPI, IPFIX/NetFlow, and subscriber metadata enrichment.
Deep knowledge of Apache Kafka, Spark Structured Streaming, and Cloudera CDP (HDFS, Hive, Iceberg, Ranger).
Experience integrating nProbe with Kafka and downstream analytics platforms.
Strong understanding of QoE metrics, A/B party correlation, and application traffic classification.
Expertise in RESTful API design, schema management (Avro/JSON), and secure data access protocols.
Familiarity with network interfaces (Gn/Gi, RADIUS, DNS) and traffic filtering strategies.
Experience implementing fallback mechanisms, error queues, and disaster recovery strategies.
Excellent communication, documentation, and stakeholder management skills.
Cloudera Certified Architect / Kafka Developer / AWS or GCP Solution Architect.
Security certifications (e.g., CISSP, CISM) will be advantageous.
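The "schema management (Avro/JSON)" requirement above typically means every flow detail record published to Kafka must match a fixed schema. A hedged sketch with entirely hypothetical field names (the actual DR specification referenced in this posting is not public here), plus a minimal shape check:

```python
# Avro-style schema for a flow detail record, expressed as a Python dict.
# All field names are illustrative assumptions, not the project's real spec.
FLOW_DR_SCHEMA = {
    "type": "record",
    "name": "FlowDetailRecord",
    "fields": [
        {"name": "subscriber_id", "type": "string"},
        {"name": "src_ip",        "type": "string"},
        {"name": "dst_ip",        "type": "string"},
        {"name": "app_protocol",  "type": "string"},  # DPI classification result
        {"name": "bytes_up",      "type": "long"},
        {"name": "bytes_down",    "type": "long"},
        {"name": "start_ts_ms",   "type": "long"},
    ],
}

def conforms(record: dict, schema: dict) -> bool:
    """True when the record carries exactly the schema's field names."""
    expected = {f["name"] for f in schema["fields"]}
    return set(record) == expected

dr = {"subscriber_id": "s-1", "src_ip": "10.0.0.1", "dst_ip": "8.8.8.8",
      "app_protocol": "dns", "bytes_up": 64, "bytes_down": 512,
      "start_ts_ms": 1700000000000}
print(conforms(dr, FLOW_DR_SCHEMA))  # True
```

In production this check is delegated to a schema registry and Avro serializer, which also enforce field types and schema-evolution compatibility rather than just field names.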

Posted 1 day ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Data Engineer
We are looking for an experienced Data Engineer with strong expertise in Snowflake, dbt, Airflow, AWS, and modern data technologies like Python, Apache Spark, and NoSQL databases. The role focuses on designing, building, and optimizing data pipelines to support analytical and regulatory needs in the banking domain.

Key Responsibilities
Design and implement scalable and secure data pipelines using Airflow, dbt, Snowflake, and AWS services.
Develop data transformation workflows and modular SQL logic using dbt for a centralized data warehouse in Snowflake.
Build batch and near real-time data processing solutions using Apache Spark and Python.
Work with structured and unstructured banking datasets stored across S3, NoSQL (e.g., MongoDB, DynamoDB), and relational databases.
Ensure data quality, lineage, and observability through logging, testing, and monitoring tools.
Support data needs for compliance, regulatory reporting, risk, fraud, and customer analytics.
Ensure secure handling of sensitive data aligned with banking compliance standards (e.g., PII masking, role-based access).
Collaborate closely with business users, data analysts, and data scientists to deliver production-grade datasets.
Implement best practices for code versioning, CI/CD, and environment management.

Required Skills And Qualifications
5-8 years of experience in data engineering, preferably in banking, fintech, or regulated industries.
Hands-on experience with:
Snowflake (data modeling, performance tuning, security)
dbt (modular SQL transformation, documentation, testing)
Airflow (orchestration, DAGs)
AWS (S3, Glue, Lambda, Redshift, IAM)
Python (ETL scripting, data manipulation)
Apache Spark (batch/stream processing using PySpark or Scala)
NoSQL databases (e.g., DynamoDB, MongoDB, Cassandra)
Strong SQL skills and experience in performance optimization and cost-efficient query design.
Exposure to data governance, compliance, and security in the banking industry.
Experience working with large-scale datasets and complex data transformations.
Familiarity with version control (e.g., Git) and CI/CD pipelines.

Preferred Qualifications
Prior experience in banking/financial services.
Knowledge of Kafka or other streaming platforms.
Exposure to data quality tools (e.g., Great Expectations, Soda).
Certifications in Snowflake, AWS, or dbt.
Strong communication skills and ability to work with cross-functional teams.

About Convera
Convera is the largest non-bank B2B cross-border payments company in the world. Formerly Western Union Business Solutions, we leverage decades of industry expertise and technology-led payment solutions to deliver smarter money movements to our customers, helping them capture more value with every transaction. Convera serves more than 30,000 customers ranging from small business owners to enterprise treasurers to educational institutions to financial institutions to law firms to NGOs. Our teams care deeply about the value we bring to our customers, which makes Convera a rewarding place to work.

This is an exciting time for our organization as we build our team with growth-minded, results-oriented people who are looking to move fast in an innovative environment. As a truly global company with employees in over 20 countries, we are passionate about diversity; we seek and celebrate people from different backgrounds, lifestyles, and unique points of view. We want to work with the best people and ensure we foster a culture of inclusion and belonging.

We offer an abundance of competitive perks and benefits, including:
Competitive salary
Opportunity to earn an annual bonus
Great career growth and development opportunities in a global organization
A flexible approach to work

There are plenty of amazing opportunities at Convera for talented, creative problem solvers who never settle for good enough and are looking to transform business-to-business payments. Apply now if you're ready to unleash your potential.
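The "PII masking" compliance requirement mentioned above can be as simple as a deterministic transform applied inside the pipeline. The policy below (keep only the last four digits of an account number) is an illustrative assumption, not any bank's actual rule:

```python
import re

# Mask an account number, preserving only the last four digits so records
# remain joinable for support workflows. Column names are hypothetical.

def mask_account(value: str) -> str:
    digits = re.sub(r"\D", "", value)        # strip separators like dashes
    if len(digits) <= 4:
        return value                         # too short to mask meaningfully
    return "*" * (len(digits) - 4) + digits[-4:]

rows = [{"customer": "A", "account": "1234-5678-9012"},
        {"customer": "B", "account": "9876543210"}]
masked = [{**r, "account": mask_account(r["account"])} for r in rows]
print(masked[0]["account"])  # ********9012
print(masked[1]["account"])  # ******3210
```

In the stack this posting describes, the same policy would more likely live as a Snowflake dynamic data masking policy or a dbt macro, so masking is enforced centrally rather than per script.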

Posted 1 day ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Network Architect (Network Traffic Intelligence & Flow Data Systems)
Location: Pune, India (with travel to onsite)
Experience Required: 8+ years in network traffic monitoring and flow data systems, with 2+ years of hands-on experience in configuring and deploying nProbe Cento in high-throughput environments.

Overview:
We are seeking a specialist with deep expertise in network traffic probes, specifically nProbe Cento, to support the deployment, configuration, and integration of flow record generation systems. The consultant will work closely with Kafka developers, solution architects, and network teams to ensure accurate, high-performance flow data capture and export. This role is critical to ensuring the scalability, observability, and compliance of the network traffic record infrastructure.

Key Responsibilities:
Design and document the end-to-end architecture for network traffic record systems, including flow ingestion, processing, storage, and retrieval.
Deploy and configure nProbe Cento on telecom-grade network interfaces.
Tune probe performance using PF_RING ZC drivers for high-speed traffic capture.
Configure IPFIX/NetFlow export and integrate with Apache Kafka for real-time data streaming.
Set up DPI rules to identify application-level traffic (e.g., popular messaging and social media applications).
Align the flow record schema with the Detail Record specification.
Lead the integration of nProbe Cento, Kafka, Apache Spark, and Cloudera CDP components into a unified data pipeline.
Collaborate with Kafka and API teams to ensure compatibility of data formats and ingestion pipelines.
Define interface specifications, deployment topologies, and data schemas for flow records and detail records.
Monitor probe health, performance, and packet loss; implement logging and alerting mechanisms.
Collaborate with security teams to implement data encryption, access control, and compliance with regulatory standards.
Guide development and operations teams through SIT/UAT, performance tuning, and production rollout.
Provide documentation, training, and handover materials for long-term operational support.

Required Skills & Qualifications:
Proven hands-on experience with nProbe Cento in production environments.
Strong understanding of IPFIX, NetFlow, sFlow, and flow-based monitoring principles.
Experience with Cloudera SDX, Ranger, Atlas, and KMS for data governance and security.
Familiarity with HashiCorp Vault for secrets management.
Strong understanding of network packet brokers (e.g., Gigamon, Ixia) and traffic aggregation strategies.
Proven ability to design high-throughput, fault-tolerant, and cloud-native architectures.
Experience with Kafka integration, including topic configuration and message formatting.
Familiarity with DPI technologies and application traffic classification.
Proficiency in Linux system administration, shell scripting, and network interface tuning.
Knowledge of telecom network interfaces and traffic tapping strategies.
Experience with PF_RING, ntopng, and related ntop tools (preferred).
Ability to work independently and collaboratively with cross-functional technical teams.
Excellent documentation and communication skills.
Certifications in Cloudera, Kafka, or cloud platforms (e.g., AWS Architect, GCP Data Engineer) will be advantageous.

A little about us:
Innova Solutions is a diverse and award-winning global technology services partner. We provide our clients with strategic technology, talent, and business transformation solutions, enabling them to be leaders in their field. Founded in 1998 and headquartered in Atlanta (Duluth), Georgia, Innova Solutions:
Employs over 50,000 professionals worldwide, with annual revenue approaching $3.0B.
Delivers strategic technology and business transformation solutions globally.
Operates through global delivery centers across North America, Asia, and Europe.
Provides services for data center migration and workload development for cloud service providers.
Awardee of prestigious recognitions, including:
Women's Choice Awards - Best Companies to Work for Women & Millennials, 2024
Forbes, America's Best Temporary Staffing and Best Professional Recruiting Firms, 2023
American Best in Business, Globee Awards, Healthcare Vulnerability Technology Solutions, 2023
Global Health & Pharma, Best Full Service Workforce Lifecycle Management Enterprise, 2023
Received 3 SBU Leadership in Business Awards
Stevie International Business Awards, Denials Remediation Healthcare Technology Solutions, 2023

Posted 1 day ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Software Engineer Consultant/Expert 34326
Location: Chennai
Work Type: Contract (Onsite)
Compensation: Up to ₹21–24 LPA (based on experience)
Notice Period: Immediate joiners preferred
Experience: Minimum 7+ years (9 preferred)

Position Summary
Seeking a skilled and motivated Full Stack Java Developer to join a growing software engineering team responsible for building and supporting a global logistics data warehouse platform. This platform provides end-to-end visibility into vehicle shipments using GCP cloud technologies, microservices architecture, and real-time data processing pipelines.

Key Responsibilities
Design, develop, and maintain robust backend systems using Java, Spring Boot, and microservices architecture
Implement and optimize REST APIs, and integrate with Pub/Sub, Kafka, and other event-driven systems
Build and maintain scalable data processing workflows using GCP BigQuery, Cloud Run, and Terraform
Collaborate with product managers, architects, and fellow engineers to deliver impactful features
Perform unit testing, integration testing, and support functional and user acceptance testing
Conduct code reviews and provide mentorship to other engineers to improve code quality and standards
Monitor system performance and implement strategies for optimization and scalability
Develop and maintain ETL/data pipelines to transform and manage logistics data
Continuously refactor and enhance existing code for maintainability and performance

Required Skills
Strong hands-on experience with Java, Spring Boot, and full stack development
Proficiency with GCP, including at least 1 year of experience with BigQuery
Experience with GCP Cloud Run, Terraform, and deploying containerized services
Deep understanding of REST APIs, microservices, Pub/Sub, Kafka, and cloud-native architectures
Experience in ETL development, data engineering, or data warehouse projects
Exposure to AI/ML integration in enterprise applications is a plus

Preferred Skills
Familiarity with AI agents and modern AI-driven data products
Experience working with global logistics, supply chain, or transportation domains

Education Requirements
Required: Bachelor's degree in Computer Science, Information Technology, or related field
Preferred: Advanced degree or specialized certifications in cloud or data engineering

Work Environment
Location: Chennai (onsite required)
Work closely with cross-functional product teams in an Agile setup
Fast-paced, data-driven environment requiring strong communication and problem-solving skills

Skills: rest apis, cloud run, bigquery, gcp, pub/sub, data, data engineering, kafka, microservices, terraform, cloud, spring boot, data warehouse, java, code, etl development, full stack development, gcp cloud platform

Posted 1 day ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Position Title: Software Engineer Consultant/Expert – GCP Data Engineer 34350
Location: Chennai
Engagement Type: Contract
Compensation: Up to ₹18 LPA
Notice Period: Immediate joiners preferred
Work Mode: Onsite

Role Overview
This role is for a proactive Google Cloud Platform (GCP) Data Engineer who will contribute to the modernization of a cloud-based enterprise data warehouse. The ideal candidate will focus on integrating diverse data sources to support advanced analytics and AI/ML-driven solutions, as well as designing scalable pipelines and data products for real-time and batch processing. This opportunity is ideal for individuals who bring both architectural thinking and hands-on experience with GCP services, big data processing, and modern DevOps practices.

Key Responsibilities
Design and implement scalable, cloud-native data pipelines and solutions using GCP technologies
Develop ETL/ELT processes to ingest and transform data from legacy and modern platforms
Collaborate with analytics, AI/ML, and product teams to enable data accessibility and usability
Analyze large datasets and perform impact assessments across various functional areas
Build data products (data marts, APIs, views) that power analytical and operational platforms
Integrate batch and real-time data using tools like Pub/Sub, Kafka, Dataflow, and Cloud Composer
Operationalize deployments using CI/CD pipelines and infrastructure as code
Ensure performance tuning, optimization, and scalability of data platforms
Contribute to best practices in cloud data security, governance, and compliance
Provide mentorship, guidance, and knowledge-sharing within cross-functional teams

Mandatory Skills
GCP expertise with hands-on use of services including: BigQuery, Dataflow, Data Fusion, Dataform, Dataproc, Cloud Composer (Airflow), Cloud SQL, Compute Engine, Cloud Functions, Cloud Run, Cloud Build, App Engine
Strong knowledge of SQL, data modeling, and data architecture
Minimum 5+ years of experience in SQL and ETL development
At least 3 years of experience in GCP cloud environments
Experience with Python, Java, or Apache Beam
Proficiency in Terraform, Docker, Tekton, and GitHub
Familiarity with Apache Kafka, Pub/Sub, and microservices architecture
Understanding of AI/ML integration, data science concepts, and production datasets

Preferred Experience
Hands-on expertise in container orchestration (e.g., Kubernetes)
Experience working in regulated environments (e.g., finance, insurance)
Knowledge of DevOps pipelines, CI/CD, and infrastructure automation
Background in coaching or mentoring junior data engineers
Experience with data governance, compliance, and security best practices in the cloud
Use of project management tools such as JIRA
Proven ability to work independently in fast-paced or ambiguous environments
Strong communication and collaboration skills to interact with cross-functional teams

Education Requirements
Required: Bachelor's degree in Computer Science, Information Systems, Engineering, or related field
Preferred: Master's degree or relevant industry certifications (e.g., GCP Data Engineer Certification)

Skills: bigquery, cloud sql, ml, apache beam, app engine, gcp, dataflow, microservices architecture, cloud functions, compute engine, project management tools, data science concepts, security best practices, pub/sub, ci/cd, compliance, cloud run, java, cloud build, jira, data, pipelines, dataproc, sql, tekton, python, github, data modeling, cloud composer, terraform, data fusion, cloud, data architecture, apache kafka, ai/ml integration, docker, data governance, infrastructure automation, dataform

Posted 1 day ago

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site

mthree is seeking a Java Developer to join a highly regarded multinational investment bank and financial services company.

Job Description:
Role: Java Developer
Team: Payment Gateway
Location: Pune (hybrid model, 2-3 days per week in the office)

Key Responsibilities:
• Develop and Maintain Applications: Design, develop, and maintain server-side applications using Java 8 to ensure high performance and responsiveness to requests from the front end.
• Scalability Solutions: Architect and implement scalable solutions for client risk management, ensuring the system can handle large volumes of transactions and data.
• Data Streaming and Caching: Use Kafka or Redis for efficient data streaming and caching, ensuring real-time data processing and low-latency access.
• Multithreading and Synchronization: Implement multithreading and synchronization techniques to improve application performance and ensure thread safety.
• Microservices Development: Develop and deploy microservices using Spring Boot, ensuring modularity and ease of maintenance.
• Design Patterns: Apply design patterns to solve complex software design problems, ensuring code reusability and maintainability.
• Linux Optimization: Ensure applications are optimized for Linux environments, including performance tuning and troubleshooting.
• Collaboration: Collaborate with cross-functional teams, including front-end developers, QA engineers, and product managers, to define, design, and ship new features.
• Troubleshooting: Troubleshoot and resolve production issues, ensuring minimal downtime and optimal performance.

Requirements:
• Educational Background: Bachelor's degree in Computer Science, Engineering, or a related field.
• Programming Expertise: Proven experience (approximately 2-5 years) in Java 8+ programming, with a strong understanding of object-oriented principles and design.
• Data Technologies: Understanding of Kafka or Redis (or a similar cache), including setup, configuration, and optimization.
• Concurrency: Experience with multithreading and synchronization, ensuring efficient and safe execution of concurrent processes.
• Frameworks: Proficiency in Spring Boot, including developing RESTful APIs and integrating with other services.
• Design Patterns: Familiarity with design patterns and their application in solving software design problems.
• Operating Systems: Solid understanding of Linux operating systems, including shell scripting and system administration.
• Problem-Solving: Excellent problem-solving skills and attention to detail, with the ability to debug and optimize code.
• Communication: Strong communication and teamwork skills, with the ability to work effectively in a collaborative environment.

Preferred Qualifications:
• Industry Experience: Experience in the financial services industry is a plus.
• Additional Skills: Knowledge of other programming languages and technologies, such as Python or Scala.
• DevOps Practices: Familiarity with DevOps practices and tools, including CI/CD pipelines, containerization (Docker), and orchestration (Kubernetes).
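The caching and synchronization duties above can be sketched in miniature. The role itself is Java-centric (where this would be a `ConcurrentHashMap` or a Redis client with TTL keys); the Python below is a conceptual stand-in showing the same concern, serialized access to shared state plus per-entry expiry, with an invented key name for the example.

```python
import threading
import time

class TTLCache:
    """Minimal thread-safe cache with per-entry expiry (illustrative only).

    In the stack this posting describes, Redis TTL keys or a Java
    ConcurrentHashMap would play this role; the lock stands in for the
    synchronization needed to keep concurrent readers/writers consistent.
    """

    def __init__(self, ttl_seconds: float):
        self._ttl = ttl_seconds
        self._lock = threading.Lock()
        self._store = {}  # key -> (value, expires_at)

    def put(self, key, value):
        with self._lock:
            self._store[key] = (value, time.monotonic() + self._ttl)

    def get(self, key, default=None):
        with self._lock:
            entry = self._store.get(key)
            if entry is None:
                return default
            value, expires_at = entry
            if time.monotonic() >= expires_at:
                del self._store[key]  # lazily evict the expired entry
                return default
            return value

cache = TTLCache(ttl_seconds=0.05)
cache.put("price:XYZ", 1530.25)   # hypothetical instrument key
hit = cache.get("price:XYZ")       # within TTL: returns the cached value
time.sleep(0.06)
miss = cache.get("price:XYZ")      # after TTL: entry has expired
```

The design choice worth noting is lazy eviction on read rather than a background sweeper thread; it keeps the synchronization surface to a single lock at the cost of expired entries lingering until next access.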

Posted 1 day ago

5.0 years

0 Lacs

Delhi, India

On-site

About Cisive
Cisive is a trusted partner for comprehensive, high-risk, compliance-driven background screening and workforce monitoring solutions, specializing in highly regulated industries such as healthcare, financial services, and transportation. We catch what others miss, and we are dedicated to helping our clients effortlessly secure the right talent. As a global leader, Cisive empowers organizations to hire with confidence.

Through our PreCheck division, Cisive provides specialized background screening and credentialing solutions tailored for healthcare organizations, ensuring patient and workforce safety. Driver iQ, our transportation-focused division, delivers FMCSA-compliant screening and monitoring solutions that help carriers hire and retain the safest drivers on the road. Unlike traditional background screening providers, Cisive takes a technology-first approach powered by advanced automation, human expertise, and compliance intelligence, all delivered through a scalable platform. Our solutions include continuous workforce monitoring, identity verification, criminal record screening, license monitoring, drug & health screening, and global background checks.

Job Summary
The Senior Software Developer is responsible for designing and delivering complex, scalable software systems, leading technical initiatives, and mentoring junior developers. This role plays a key part in driving high-impact projects and ensuring the delivery of robust, maintainable solutions. In addition to core development duties, the role works closely with the business to identify opportunities for automation and web scraping to improve operational efficiency. The Senior Software Developer will collaborate with Cisive's Software Development team and client stakeholders to support, analyze, mine, and report on IT and business data, focusing on optimizing data handling for web scraping processes.

This individual will manage and consult on data flowing into and out of Cisive systems, ensuring data integrity, performance, and compliance with operational standards. The role is critical to achieving service excellence and automation across Cisive's diverse product offerings and will continuously strive to enhance process efficiency and data flow across platforms.

Duties and Responsibilities
Lead the design, architecture, and implementation of scalable and maintainable web scraping solutions using the Scrapy framework, integrated with tools such as Kafka, Zookeeper, and Redis
Develop and maintain web crawlers to automate data extraction from various sources, ensuring alignment with user and application requirements
Research, design, and implement automation strategies across multiple platforms, tools, and technologies to optimize business processes
Monitor, troubleshoot, and resolve issues affecting the performance, reliability, and stability of scraping systems and automation tools
Serve as a Subject Matter Expert (SME) for automation systems, providing guidance and support to internal teams
Analyze and validate extracted data to ensure accuracy, integrity, and compliance with Cisive's data standards
Define, implement, and enforce data requirements, standards, and best practices to ensure consistent and efficient operations
Collaborate with stakeholders and end users to define technical requirements, business goals, and alternative solutions for data collection and reporting
Create, manage, and document reports, processes, policies, and project plans, including risk assessments and goal tracking
Conduct code reviews, enforce coding standards, and provide technical leadership and mentorship to development team members
Proactively identify and mitigate technical risks, recommending improvements in technologies, tools, and processes
Drive the adoption of modern development tools, frameworks, and best practices
Contribute to strategic planning related to automation initiatives and product development
Ensure clear, thorough communication and documentation across teams to support knowledge sharing and training

Minimum Qualifications
Bachelor's degree in Computer Science, Software Engineering, or a related field
5+ years of professional software development experience
Strong proficiency in HTML, XML, XPath, XSLT, and Regular Expressions for data extraction and transformation
Hands-on experience with Visual Studio
Strong proficiency in Python
Some experience with C# .NET
Solid experience with MS SQL Server, including strong SQL querying and data analysis skills
Experience with web scraping, particularly using the Scrapy framework integrated with Kafka, Zookeeper, and Redis
Experience with .NET automation tools such as Selenium
Understanding of CAPTCHA-solving services and working with proxy services
Experience working in a Linux environment is a plus
Highly self-motivated and detail-oriented, with a proactive, goal-driven mindset
Strong team player with dependable work habits and well-developed interpersonal skills
Excellent verbal and written communication skills
Willingness and flexibility to adapt schedule when necessary to meet client needs
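The extraction-and-validation stack this posting lists (XPath queries plus regular expressions) can be sketched with Python's standard library alone. In the actual role this parsing would sit inside Scrapy spider callbacks using lxml-backed selectors; here `xml.etree.ElementTree`'s limited XPath support stands in, and the sample markup and field names are invented.

```python
import re
import xml.etree.ElementTree as ET

# Invented payload standing in for a scraped page. It is well-formed XML so
# the stdlib parser can handle it; Scrapy/lxml selectors tolerate real HTML.
PAGE = """
<records>
  <record><name>Jane Roe</name><license>TX-48211</license></record>
  <record><name>John Doe</name><license>badvalue</license></record>
</records>
"""

# Hypothetical license format: two letters, a dash, five digits.
LICENSE_RE = re.compile(r"^[A-Z]{2}-\d{5}$")

def extract_records(xml_text: str) -> list:
    """Pull (name, license) pairs via XPath-style queries, then validate
    each license against a regular expression before accepting it."""
    root = ET.fromstring(xml_text)
    out = []
    for rec in root.findall("./record"):  # ElementTree's XPath subset
        name = rec.findtext("name", default="").strip()
        lic = rec.findtext("license", default="").strip()
        out.append({"name": name, "license_valid": bool(LICENSE_RE.match(lic))})
    return out

records = extract_records(PAGE)
```

Validating at extraction time, rather than downstream, mirrors the posting's "analyze and validate extracted data" duty: malformed fields are flagged the moment the crawler sees them instead of polluting the data store.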

Posted 1 day ago
