
16012 Kafka Jobs - Page 8

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

1.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Description

About Oracle Analytics & Big Data Service: Oracle Analytics is a complete platform that supports every role within analytics, offering cloud-native services or on-premises solutions without compromising security or governance. Our platform delivers a unified system for managing everything from data collection to decision-making, with seamless integration of AI and machine learning to help businesses accelerate productivity and uncover critical insights.

Oracle Big Data Service, part of Oracle Analytics, is a fully managed, automated cloud service designed to help enterprises create scalable Hadoop-based data lakes. The service's scope covers not only tight integration with OCI's native infrastructure (security, cloud, storage, etc.) but also deep integration with other relevant cloud-native OCI services. It includes cloud-native approaches to service-level patching and upgrades, and maintaining high availability of the service in the face of random failures and planned downtimes in the underlying infrastructure (e.g., patching Linux kernels to address a security vulnerability). Developing systems that monitor the service, collect telemetry on its runtime characteristics, and act on that telemetry data is part of the charter.

We are looking for experienced engineers with expertise in, and a passion for, solving difficult problems in distributed systems and highly available services to join our Oracle Big Data Service team. In this role, you will be instrumental in building, maintaining, and enhancing our managed, cloud-native Big Data service focused on large-scale data processing and analytics. At Oracle, you can help shape, design, and build innovative new systems from the ground up. These are exciting times in our space: we are growing fast, still at an early stage, and working on ambitious new initiatives. Engineers at any level can have significant technical and business impact.
Minimum Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
• Minimum of 1-2 years of experience in software development, with a focus on large-scale distributed systems, cloud services, or Big Data technologies.
• US passport holders only; this is required by the position to access US Gov regions.
• Expertise in coding in Java and Python, with an emphasis on tuning/optimization.
• Experience with Linux systems administration, troubleshooting, and security best practices in cloud environments.
• Experience with open-source software in the Big Data ecosystem.
• Experience at an organization with an operational/DevOps culture.
• Solid understanding of networking, storage, and security components related to cloud infrastructure.
• Solid foundation in data structures, algorithms, and software design, with strong analytical and debugging skills.

Preferred Qualifications:
• Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN), Spark, Kafka, Flink, and other big data technologies.
• Proven expertise in cloud-native architectures and services, preferably within Oracle Cloud Infrastructure (OCI), AWS, Azure, or GCP.
• In-depth understanding of Java and JVM mechanics.
• Good problem-solving skills and the ability to work in a fast-paced, agile environment.

Responsibilities

Key Responsibilities:
• Participate in the development and maintenance of a scalable and secure Hadoop-based data lake service.
• Code, integrate, and operationalize open- and closed-source data ecosystem components for Oracle cloud service offerings.
• Collaborate with cross-functional teams including DevOps, Security, and Product Management to define and execute product roadmaps, service updates, and feature enhancements.
• Become an active member of the Apache open source community when working on open source components.
• Ensure compliance with security protocols and industry best practices when handling large-scale data processing in the cloud.
Qualifications

Career Level - IC2

About Us

As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
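The Oracle listing above emphasizes building telemetry systems and acting on runtime data. As a hedged illustration only (this is not Oracle's implementation; the class and metric names are hypothetical), a rolling-window anomaly check over a latency metric could be sketched as:

```python
from collections import deque
import statistics

class RollingMetric:
    """Keep the last `window` samples of a service metric and flag outliers."""
    def __init__(self, window=5, threshold=2.0):
        self.samples = deque(maxlen=window)
        self.threshold = threshold  # alert when a sample exceeds mean + threshold*stdev

    def record(self, value):
        self.samples.append(value)

    def is_anomalous(self, value):
        # Need a few samples before the baseline is meaningful
        if len(self.samples) < 3:
            return False
        mean = statistics.mean(self.samples)
        stdev = statistics.stdev(self.samples)
        return stdev > 0 and value > mean + self.threshold * stdev

latency = RollingMetric(window=5)
for v in [100, 102, 98, 101, 99]:   # steady ~100 ms baseline
    latency.record(v)
print(latency.is_anomalous(250))    # a 250 ms spike stands out against the baseline
```

Real telemetry pipelines add aggregation across hosts, alert routing, and persistence; this only shows the detection step.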

Posted 1 day ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Responsibility

Data Handling and Processing:
• Proficient in SQL Server and query optimization.
• Expertise in application data design and process management.
• Extensive knowledge of data modelling.
• Hands-on experience with Azure Data Factory, Azure Synapse Analytics, and Microsoft Fabric.
• Experience working with Azure Databricks.
• Expertise in data warehouse development, including experience with SSIS (SQL Server Integration Services) and SSAS (SQL Server Analysis Services).
• Proficiency in ETL processes (data extraction, transformation, and loading), including data cleaning and normalization.
• Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) for large-scale data processing.
• Understanding of data governance, compliance, and security measures within Azure environments.

Data Analysis and Visualization:
• Experience in data analysis, statistical modelling, and machine learning techniques.
• Proficiency in analytical tools like Python, R, and libraries such as Pandas and NumPy for data analysis and modelling.
• Strong expertise in Power BI for data visualization, data modelling, and DAX queries, with knowledge of best practices.
• Experience in implementing Row-Level Security in Power BI.
• Ability to work with medium-complexity data models and quickly understand application data design and processes.
• Familiarity with industry best practices for Power BI and experience in performance optimization of existing implementations.
• Understanding of machine learning algorithms, including supervised, unsupervised, and deep learning techniques.

Non-Technical Skills:
• Ability to lead a team of 4-5 developers and take ownership of deliverables.
• Demonstrates a commitment to continuous learning, particularly with new technologies.
• Strong communication skills in English, both written and verbal.
• Able to effectively interact with customers during project implementation.
• Capable of explaining complex technical concepts to non-technical stakeholders.
Data Management: SQL, Azure Synapse Analytics, Azure Analysis Services and Data Marts, Microsoft Fabric
ETL Tools: Azure Data Factory, Azure Databricks, Python, SSIS
Data Visualization: Power BI, DAX
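The ETL requirements in this listing (extraction, transformation, loading, plus cleaning and normalization) can be illustrated with a minimal sketch in plain Python. All field names and values here are made up for the example:

```python
# Minimal ETL sketch: extract raw rows, transform (clean + normalize), load into a store.

raw_rows = [
    {"id": "1", "city": "  ahmedabad ", "revenue": "1,200"},
    {"id": "2", "city": "AHMEDABAD",    "revenue": "980"},
    {"id": "2", "city": "Ahmedabad",    "revenue": "980"},   # duplicate to be deduplicated
]

def transform(row):
    """Trim whitespace, normalize case, and parse numeric strings."""
    return {
        "id": int(row["id"]),
        "city": row["city"].strip().title(),
        "revenue": float(row["revenue"].replace(",", "")),
    }

def load(rows):
    """Keyed load: later rows with the same id overwrite earlier ones (dedupe)."""
    store = {}
    for row in rows:
        store[row["id"]] = row
    return store

warehouse = load(transform(r) for r in raw_rows)
print(warehouse[1]["city"], warehouse[1]["revenue"])  # Ahmedabad 1200.0
```

Tools like Azure Data Factory or SSIS orchestrate the same extract-transform-load steps declaratively and at scale; the sketch only shows the logical shape of the pipeline.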

Posted 1 day ago

Apply

6.0 years

0 Lacs

Delhi, India

On-site

About the Role

We're looking for top-tier AI/ML Engineers with 6+ years of experience to join our fast-paced and innovative team. If you thrive at the intersection of GenAI, Machine Learning, MLOps, and application development, we want to hear from you. You'll have the opportunity to work on high-impact GenAI applications and build scalable systems that solve real business problems.

Key Responsibilities
• Design, develop, and deploy GenAI applications using techniques like RAG (Retrieval-Augmented Generation), prompt engineering, model evaluation, and LLM integration.
• Architect and build production-grade Python applications using frameworks such as FastAPI or Flask.
• Implement gRPC services, event-driven systems (Kafka, Pub/Sub), and CI/CD pipelines for scalable deployment.
• Collaborate with cross-functional teams to frame business problems as ML use cases: regression, classification, ranking, forecasting, and anomaly detection.
• Own end-to-end ML pipeline development: data preprocessing, feature engineering, model training/inference, deployment, and monitoring.
• Work with tools such as Airflow, Dagster, SageMaker, and MLflow to operationalize and orchestrate pipelines.
• Ensure model evaluation, A/B testing, and hyperparameter tuning are done rigorously for production systems.

Must-Have Skills
• Hands-on experience with GenAI/LLM-based applications: RAG, evals, vector stores, embeddings.
• Strong backend engineering using Python, FastAPI/Flask, gRPC, and event-driven architectures.
• Experience with CI/CD, infrastructure, containerization, and cloud deployment (AWS, GCP, or Azure).
• Proficiency in ML best practices: feature selection, hyperparameter tuning, A/B testing, model explainability.
• Proven experience with batch data pipelines and training/inference orchestration.
• Familiarity with tools like Airflow/Dagster, SageMaker, and data pipeline architecture.
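For readers unfamiliar with RAG, the retrieval step this listing mentions can be sketched with a toy example. Production systems use learned embeddings and a vector store; this illustration substitutes term-frequency vectors and cosine similarity, and all documents are made up:

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a term-frequency vector (real systems use learned embeddings)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "kafka is a distributed event streaming platform",
    "flask is a lightweight python web framework",
    "airflow schedules and monitors data pipelines",
]
context = retrieve("how does kafka event streaming work", docs, k=1)

# The retrieved context is then stuffed into the LLM prompt ("augmented generation")
prompt = f"Answer using only this context:\n{context[0]}\n\nQuestion: how does kafka event streaming work"
print(context[0])
```

The final step, sending `prompt` to an LLM, is omitted here since it depends on the provider's API.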

Posted 1 day ago

Apply

15.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

#HiringAlert

Job Role: Principal Java Engineer
Location: Hybrid – Kolkata (3–4 days per week in office)
Industry: Gaming / Real-time Systems / Technology
Employment Type: Full-Time
Salary: Based on experience and aligned with industry standards.

About the Role
We're a fast-growing start-up in the gaming industry, building high-performance, real-time platforms that power immersive digital experiences. We're looking for a Principal Java Engineer to lead the design and development of scalable backend systems that support live, data-intensive applications. This is a hybrid role based in Kolkata, ideal for someone who thrives on solving technical challenges, enjoys taking ownership, and wants to build great software in a dynamic, informal, and high-energy environment.

Key Responsibilities
• Design and develop scalable, resilient backend systems using Java (17+) and Spring Boot
• Architect APIs, microservices, and real-time backend components for gaming platforms
• Own backend infrastructure, deployment pipelines, monitoring, and system performance
• Collaborate with product and delivery teams to translate ideas into production-ready features
• Take full ownership of backend architecture, from planning to delivery and iteration
• Continuously improve code quality, engineering practices, and overall system design

Required Skills & Experience
• 10–15 years of experience in backend engineering with strong expertise in Java (preferably 17+) and Spring Boot
• Proven experience building high-performance, distributed systems at scale
• Hands-on with cloud platforms (AWS, GCP, or Azure), Docker, and Kubernetes
• Strong understanding of SQL and NoSQL databases, caching (e.g., Redis), and messaging systems (Kafka, RabbitMQ)
• Solid skills in debugging, performance tuning, and system optimization
• Ability to work independently, make pragmatic decisions, and collaborate in a hybrid team setup

Good to Have
• Experience in gaming, real-time platforms, or multiplayer systems
• Familiarity with WebSockets, telemetry pipelines, or event-driven architecture
• Exposure to CI/CD pipelines, infrastructure as code, and observability tools

Why Join Us?
• Work in a creative, fast-paced domain that blends engineering depth with product excitement
• Flat structure and high trust: focus on outcomes, not formalities
• Visible impact: everything you build will be used by real players in real time
• Informal, collaborative culture where we take our work seriously, but not ourselves
• Flexible hybrid setup: 3 to 4 days a week in-office, with room for focused work and team alignment

How to Apply
Send your resume or portfolio to talent@projectpietech.com. We'd love to hear from engineers who are passionate about solving hard problems and building something exciting from the ground up.
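The caching requirement above (e.g., Redis) usually implies the cache-aside pattern: check the cache, fall back to the database on a miss, then populate the cache. A self-contained sketch, with an in-memory TTL store standing in for Redis and all names hypothetical:

```python
import time

class TTLCache:
    """In-memory stand-in for Redis: values expire after `ttl` seconds."""
    def __init__(self, ttl=60):
        self.ttl = ttl
        self.store = {}

    def get(self, key):
        entry = self.store.get(key)
        if entry and time.monotonic() < entry[1]:
            return entry[0]
        return None  # missing or expired

    def set(self, key, value):
        self.store[key] = (value, time.monotonic() + self.ttl)

calls = {"db": 0}
def load_player_from_db(player_id):
    calls["db"] += 1  # count trips to the slow backing store
    return {"id": player_id, "score": 1200}

cache = TTLCache(ttl=60)
def get_player(player_id):
    """Cache-aside: try the cache first, fall back to the DB, then populate the cache."""
    player = cache.get(player_id)
    if player is None:
        player = load_player_from_db(player_id)
        cache.set(player_id, player)
    return player

get_player(7); get_player(7)
print(calls["db"])  # only the first lookup hits the database
```

The TTL bounds staleness; real deployments also invalidate on writes and handle cache stampedes.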

Posted 1 day ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

OPENTEXT

OpenText is a global leader in information management, where innovation, creativity, and collaboration are the key components of our corporate culture. As a member of our team, you will have the opportunity to partner with the most highly regarded companies in the world, tackle complex issues, and contribute to projects that shape the future of digital transformation.

Your Impact
We are seeking a skilled and experienced Software Engineer with expertise in Large Language Models (LLMs), Java, Python, Kubernetes, Helm, and cloud technologies like AWS. The ideal candidate will contribute to designing, developing, and maintaining scalable software solutions using microservices architecture. This role offers an exciting opportunity to work with cutting-edge technologies in a collaborative environment.

What the Role Offers
• Design, develop, troubleshoot, and debug software programs for software enhancements and new products.
• Integrate Large Language Models (LLMs) into business applications to enhance functionality and user experience.
• Develop and maintain transformer-based models.
• Develop RESTful APIs and ensure seamless integration across services.
• Collaborate with cross-functional teams to gather requirements and translate them into technical solutions.
• Implement best practices for cloud-native development using AWS services such as EC2, Lambda, SageMaker, and S3.
• Deploy, manage, and scale containerized applications using Kubernetes (K8s) and Helm.
• Design enhancements, updates, and programming changes for portions and subsystems of application software, utilities, databases, and Internet-related tools.
• Analyze designs and determine the coding, programming, and integration activities required, based on general objectives and knowledge of the overall architecture of the product or solution.
• Collaborate and communicate with management and internal and outsourced development partners regarding software systems design status, project progress, and issue resolution.
• Represent the software systems engineering team for all phases of larger and more complex development projects.
• Ensure system reliability, security, and performance through effective monitoring and troubleshooting.
• Write clean, efficient, and maintainable code following industry standards.
• Participate in code reviews, mentorship, and knowledge-sharing within the team.

What You Need to Succeed
• Bachelor's or Master's degree in Computer Science, Information Systems, or equivalent.
• Typically 3-5 years of experience.
• Strong understanding of Large Language Models (LLMs) and experience applying them in real-world applications.
• Expertise in Elasticsearch or similar search and indexing technologies.
• Expertise in designing and implementing microservices architecture.
• Solid experience with AWS services such as EC2, VPC, ECR, EKS, and SageMaker for cloud deployment and management.
• Proficiency in container orchestration tools such as Kubernetes (K8s) and packaging/deployment tools like Helm.
• Strong problem-solving skills and the ability to troubleshoot complex issues.
• Strong experience in Java and Python development, with proficiency in frameworks like Spring Boot or Java EE.
• Good hands-on experience in designing and writing modular, object-oriented code.
• Good knowledge of REST APIs, Spring, Spring Boot, and Hibernate.
• Excellent analytical, troubleshooting, and problem-solving skills.
• Ability to work effectively both within the immediate team and across teams.
• Experience working with version control and build tools like Git, GitLab, Maven, Jenkins, and GitLab CI.
• Excellent communication and collaboration skills.
• Familiarity with Python for LLM-related tasks.
• Working knowledge of RAG (Retrieval-Augmented Generation).
• Experience working with NLP frameworks such as Hugging Face, OpenAI, or similar.
• Knowledge of database systems like PostgreSQL, MongoDB, or DynamoDB.
• Experience with observability tools like Prometheus, Grafana, or the ELK Stack.
• Experience working with event-driven architectures and messaging systems (e.g., Kafka, RabbitMQ).
• Experience with CI/CD pipelines, DevOps practices, and infrastructure as code (e.g., Terraform, CloudFormation).
• Familiarity with Agile/Scrum development methodologies.

One Last Thing
OpenText is more than just a corporation; it's a global community where trust is foundational, the bar is raised, and outcomes are owned. Join us on our mission to drive positive change through privacy, technology, and collaboration. At OpenText, we don't just have a culture; we have character. Choose us because you want to be part of a company that embraces innovation and empowers its employees to make a difference.

OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please contact us at hr@opentext.com. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace.
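The Elasticsearch-style search expertise this listing asks for rests on the inverted index: a mapping from each term to the documents containing it. A minimal illustration (this is not Elasticsearch's actual implementation; documents and queries are made up):

```python
from collections import defaultdict

class InvertedIndex:
    """Map each term to the set of document ids containing it."""
    def __init__(self):
        self.index = defaultdict(set)
        self.docs = {}

    def add(self, doc_id, text):
        self.docs[doc_id] = text
        for term in text.lower().split():
            self.index[term].add(doc_id)

    def search(self, query):
        """AND-search: return ids of docs containing every query term."""
        terms = query.lower().split()
        if not terms:
            return set()
        result = set(self.index.get(terms[0], set()))
        for term in terms[1:]:
            result &= self.index.get(term, set())
        return result

idx = InvertedIndex()
idx.add(1, "deploy containers with kubernetes and helm")
idx.add(2, "train large language models on aws")
idx.add(3, "deploy language models with kubernetes")
print(sorted(idx.search("kubernetes deploy")))  # [1, 3]
```

Real engines add tokenization/stemming, relevance scoring (e.g., BM25), and distributed sharding on top of this core structure.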

Posted 1 day ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Responsibilities
• Successfully and independently deliver large projects, including scoping, planning, design, development, testing, rollout, and maintenance.
• Write clean, concise, modular, and well-tested code.
• Review code from junior engineers and provide constant, constructive feedback.
• Contribute to building and maintaining documentation related to the team's projects.
• Create high-quality, loosely coupled, reliable, and extensible technical designs. Actively weigh the trade-offs between different designs and apply the solution suited to the situation and requirements.
• Participate in the team's on-call rotation and lead the troubleshooting and resolution of any issues related to the services, work sub-streams, or products owned by your team.
• Constantly improve the health and quality of the services and code you work on, through established practices and new initiatives.
• Lead cross-team collaborations for the projects you work on.
• Support hiring and onboarding activities, coach and develop junior members of your team, and contribute to knowledge sharing.

Must-Have Qualifications and Experience
• 4-6 years of hands-on experience in designing, developing, testing, and deploying small to mid-scale applications in any language or stack.
• 2+ years of recent and active software development experience.
• Good understanding of Golang; able to use Go concurrency patterns and contribute to building reusable Go components.
• Strong experience in designing loosely coupled, reliable, and extensible distributed services.
• Great understanding of clean architecture, SOLID principles, and event-driven architecture.
• Experience with message broker services like SQS, Kafka, etc.
• Strong data modeling experience with relational databases.
• Strong cross-team collaboration and communication skills.
• Self-driven, with a passion for learning new things quickly, solving challenging problems, and getting better with support from your manager.

Nice to Have
• A bachelor's degree in computer science, information technology, or equivalent education.
• Experience with NoSQL databases.

Posted 1 day ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Senior Software Engineer – Backend (Python)
📍 Location: Hyderabad (Hybrid)
🕒 Experience: 5 – 12 years

About the Role:
We are looking for a Senior Software Engineer – Backend with strong expertise in Python and modern big data technologies. This role involves building scalable backend solutions for a leading healthcare product-based company.

Key Skills:
• Programming: Python, Spark-Scala, PySpark (PySpark API)
• Big Data: Hadoop, Databricks
• Data Engineering: SQL, Kafka
• Strong problem-solving skills and experience in backend architecture

Why Join?
• Hybrid work model in Hyderabad
• Opportunity to work on innovative healthcare products
• Collaborative environment with modern tech stack

Keywords for Search: Python, PySpark, Spark, Spark-Scala, Hadoop, Databricks, Kafka, SQL, Backend Development, Big Data Engineering, Healthcare Technology
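The PySpark skills listed center on the map/reduceByKey pattern for distributed aggregation. As an illustration only, here is that pattern in plain Python standing in for a Spark cluster, with made-up claim records:

```python
from collections import defaultdict
from functools import reduce

# Input: (department, claim amount) records -- values are illustrative
records = [
    ("cardiology", 250.0),
    ("oncology", 1200.0),
    ("cardiology", 175.0),
    ("oncology", 300.0),
]

# "map": emit (key, value) pairs; here the records already have that shape
pairs = [(dept, amount) for dept, amount in records]

# "reduceByKey": group values by key, then fold each group with an
# associative function (associativity is what lets Spark parallelize this)
grouped = defaultdict(list)
for key, value in pairs:
    grouped[key].append(value)
totals = {key: reduce(lambda a, b: a + b, values) for key, values in grouped.items()}

print(totals["cardiology"])  # 425.0
```

In PySpark itself this would be roughly `sc.parallelize(records).reduceByKey(lambda a, b: a + b)`, with the grouping and folding distributed across executors.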

Posted 1 day ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Wissen Technology is Hiring for Java + Python Developer

About Wissen Technology:
At Wissen Technology, we deliver niche, custom-built products that solve complex business challenges across industries worldwide. Founded in 2015, our core philosophy is built around a strong product engineering mindset: ensuring every solution is architected and delivered right the first time. Today, Wissen Technology has a global footprint with 2000+ employees across offices in the US, UK, UAE, India, and Australia.

Our commitment to excellence translates into delivering 2X impact compared to traditional service providers. How do we achieve this? Through a combination of deep domain knowledge, cutting-edge technology expertise, and a relentless focus on quality. We don't just meet expectations; we exceed them by ensuring faster time-to-market, reduced rework, and greater alignment with client objectives. We have a proven track record of building mission-critical systems across industries, including financial services, healthcare, retail, manufacturing, and more.

Wissen stands apart through its unique delivery models. Our outcome-based projects ensure predictable costs and timelines, while our agile pods give clients the flexibility to adapt to their evolving business needs. Wissen leverages its thought leadership and technology prowess to drive superior business outcomes. Our success is powered by top-tier talent. Our mission is clear: to be the partner of choice for building world-class custom products that deliver exceptional impact, the first time, every time.

Job Summary:
We're looking for a versatile Java + Python Developer who thrives in backend development and automation. You'll be working on scalable systems, integrating third-party services, and contributing to high-impact projects across fintech, data platforms, and cloud-native applications.

Experience: 2-10 Years
Location: Bengaluru
Mode of work: Full time

Key Responsibilities:
• Design, develop, and maintain backend services using Java and Python
• Build and integrate RESTful APIs, microservices, and data pipelines
• Write clean, efficient, and testable code across both Java and Python stacks
• Work on real-time, multithreaded systems and optimize performance
• Collaborate with DevOps and data engineering teams on CI/CD, deployment, and monitoring
• Participate in design discussions, peer reviews, and Agile ceremonies

Required Skills:
• 2–10 years of experience in software development
• Strong expertise in Core Java (8+) and Spring Boot
• Proficient in Python (data processing, scripting, API development)
• Solid understanding of data structures, algorithms, and multithreading
• Hands-on experience with REST APIs, JSON, and SQL/NoSQL databases (PostgreSQL, MongoDB, etc.)
• Familiarity with Git, Maven/Gradle, Jenkins, and Agile/Scrum

Preferred Skills:
• Experience with Kafka, RabbitMQ, or other message queues
• Cloud services (AWS, Azure, or GCP)
• Knowledge of data engineering tools (Pandas, NumPy, PySpark, etc.)
• Docker/Kubernetes familiarity
• Exposure to ML/AI APIs or DevOps scripting

Wissen Sites:
Website: www.wissen.com
LinkedIn: https://www.linkedin.com/company/wissen-technology
Wissen Leadership: https://www.wissen.com/company/leadership-team/
Wissen Live: https://www.linkedin.com/company/wissen-technology/posts/feedView=All
Wissen Thought Leadership: https://www.wissen.com/articles/
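The "real-time, multithreaded systems" requirement above usually means producer-consumer pipelines. A minimal, self-contained Python sketch using a bounded queue and worker threads (the squaring workload is a placeholder for real processing):

```python
import queue
import threading

# A bounded queue decouples a producer from worker threads and applies backpressure.
tasks = queue.Queue(maxsize=10)
results = []
results_lock = threading.Lock()

def worker():
    while True:
        item = tasks.get()
        if item is None:          # sentinel: shut this worker down
            tasks.task_done()
            break
        with results_lock:        # protect the shared result list
            results.append(item * item)
        tasks.task_done()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for n in range(5):                # produce work items
    tasks.put(n)
for _ in threads:                 # one sentinel per worker
    tasks.put(None)
tasks.join()                      # wait until every item is processed
for t in threads:
    t.join()
print(sorted(results))  # [0, 1, 4, 9, 16]
```

`queue.Queue` is already thread-safe; the explicit lock only guards the plain list the workers append to.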

Posted 1 day ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Darwinbox:
Darwinbox is Asia's fastest-growing HR technology platform, designing the future of work by building the world's best HR tech, driven by a fierce focus on employee experience and customer success, and continuous, iterative innovation. We are the preferred choice of 1000+ global enterprises to manage their 4 million+ employees across 130+ countries. Darwinbox's new-age HCM suite competes with local as well as global players in the enterprise technology space (such as SAP, Oracle, and Workday). The firm has acquired notable customers ranging from large conglomerates to unicorn start-ups: Nivea, Starbucks, DLF, JSW, Adani Group, Crisil, CRED, Vedanta, Mahindra, Glenmark, Gokongwei Group, Mitra Adiperkasa, EFS Facilities Management, VNG Corporation, and many more. Our vision of building a world-class product company from Asia is backed by marquee global investors like Microsoft, Salesforce, Sequoia Capital, TCV, KKR, and Partners Group.

Why Join Us?
The rate at which our product and market presence are growing is unprecedented. We're a rocket ship, and we're not planning on slowing down anytime soon. That's why we need you! You'll experience a culture of:
• Disproportionate rewards for top performance
• Accelerated growth in a hyper-growth environment
• A wellbeing-first culture focused on employee care
• Continuous learning and professional development
• Meaningful relationships and a collaborative environment

Role Overview:
We are looking for a highly skilled Engineering Architect to drive our platform's architectural vision, scalability, and reliability. You will work closely with engineering teams to design and implement robust, high-performance, secure solutions that align with our business objectives.

Responsibilities:
• Define and implement the architectural roadmap, ensuring scalability, reliability, and security of the platform.
• Provide technical leadership and mentorship to development teams across backend and frontend technologies.
• Design and optimize microservices architecture, improving system performance and resilience.
• Evaluate and integrate emerging technologies to enhance platform capabilities.
• Ensure best practices in coding, security, and DevOps across the engineering teams.
• Collaborate with product managers and stakeholders to align technical decisions with business needs.
• Optimize cloud infrastructure on AWS and Azure for cost efficiency and performance.
• Lead technical reviews, troubleshoot complex issues, and provide solutions for performance bottlenecks.

Requirements:
• 10+ years of experience in software engineering, with at least 4+ years in an architectural role.
• Strong expertise in backend technologies, including PHP, Node.js, and microservices architecture.
• Proficiency in front-end frameworks like Angular and TypeScript.
• Experience with MongoDB, database design, and query optimization.
• Deep understanding of cloud platforms (AWS & Azure) and DevOps best practices.
• Expertise in designing scalable, distributed systems with high availability.
• Strong knowledge of API design, authentication, and security best practices.
• Experience with containerization and orchestration tools like Docker and Kubernetes.
• Excellent problem-solving skills and the ability to drive technical decisions.

Preferred Qualifications:
• Experience with CI/CD pipelines and infrastructure as code.
• Knowledge of event-driven architectures and message queues (SQS, RabbitMQ, Kafka, etc.).
• Prior experience at a SaaS or enterprise product-based company.
• Strong leadership and mentoring skills to guide engineering teams.

Posted 1 day ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

It's fun to work in a company where people truly BELIEVE in what they're doing! We're committed to bringing passion and customer focus to the business. Job Description This role requires working from our local Hyderabad office 2-3x a week. ABOUT THE ROLE: We are seeking a talented individual to join our team as a Java Backend Developer. The Java Backend Developer is self-driven and has a holistic, big picture mindset in developing enterprise solutions. In this role, he/she will be responsible for designing modern domain-driven, event-driven Microservices architecture to host on public Cloud platforms (AWS) and integration with modern technologies such as Kafka for event management/streaming, Docker & Kubernetes for Containerization. You will also be responsible for developing and supporting applications in Billing, Collections, and Payment Gateway within the commerce and club management Platform include assisting with the support of existing services as well as designing and implementing new business solutions, application deployment utilizing a thorough understanding of applicable technology, tools, and existing designs. The work involves working with product teams, technical leads, business analysts, DBAs, infrastructure, and other cross-department teams to evaluate business needs and provide end-to-end technical solutions. 
WHAT YOU'LL DO:
- Act as a Java Backend Developer on a development team; collaborate with other team members and contribute to all phases of the Software Development Life Cycle (SDLC)
- Apply Domain-Driven Design, Object-Oriented Design, and proven design patterns
- Hands-on coding and development following secure coding guidelines and Test-Driven Development
- Work with QA teams to conduct integrated (application and database) stress testing, performance analysis, and tuning
- Support systems testing and migration of platforms and applications to production
- Make enhancements to existing web applications built using Java and Spring frameworks
- Ensure quality, security, and compliance requirements are met
- Act as an escalation point for application support and troubleshooting
- Bring a passion for hands-on coding, putting the customer first, and delivering an exceptional and reliable product to ABC Fitness's customers
- Take up tooling, integrate with other applications, pilot new-technology proofs of concept, and leverage the outcomes in ongoing solution initiatives
- Stay curious about where technology and the industry are going, and constantly strive to keep up through personal projects
- Strong analytical skills with high attention to detail and accuracy; expert in debugging issues and root cause analysis
- Strong organizational, multi-tasking, and prioritizing skills

WHAT YOU'LL NEED:
- Computer Science degree or equivalent work experience
- Work experience as a senior developer in a team environment
- 3+ years of application development and implementation experience
- 3+ years of Java experience
- 3+ years of Spring experience
- Work experience in an Agile scrum team
- Work experience creating or maintaining RESTful or SOAP web services
- Work experience creating and maintaining cloud-enabled/cloud-native distributed applications
- Knowledge of API gateways and integration frameworks, containers, and container orchestration
- Knowledge of and experience with system application troubleshooting and quality assurance application testing
- A focus on delivering outcomes to customers, encompassing designing, coding, ensuring quality, and delivering changes to our customers

AND IT'S GREAT TO HAVE:
- 2+ years of SQL experience
- Billing or payment processing industry experience
- Knowledge and understanding of DevOps principles
- Knowledge and understanding of cloud computing, PaaS design principles, microservices, and containers
- Knowledge and understanding of application or software security, such as web application penetration testing, secure code review, and secure static code analysis
- Ability to lead multiple projects simultaneously
- Good verbal, written, and interpersonal communication skills

WHAT'S IN IT FOR YOU:
- Purpose-led company with a values-focused culture – Best Life, One Team, Growth Mindset
- Time off – competitive PTO plans with 15 days of earned accrued leave, 12 days of sick leave, and 12 days of casual leave per year
- 11 holidays plus 4 Days of Disconnect – once a quarter, we take a collective breather and enjoy a day off together around the globe. #oneteam
- Group Mediclaim insurance coverage of INR 500,000 for employee + spouse, 2 kids, and parents or parents-in-law, including EAP counseling
- Life insurance and personal accident insurance
- Best Life Perk – we are committed to meeting you wherever you are in your fitness journey with a quarterly reimbursement
- Premium Calm app – enjoy tranquility with a Calm app subscription for you and up to 4 dependents over the age of 16
- Support for working women with financial aid towards a crèche facility, ensuring a safe and nurturing environment for their little ones while they focus on their careers

We're committed to diversity and passion, and encourage you to apply even if you don't demonstrate all the listed skillsets!

ABC'S COMMITMENT TO DIVERSITY, EQUALITY, BELONGING AND INCLUSION:
ABC is an equal opportunity employer.
We celebrate diversity and are committed to creating an inclusive environment for all employees. We are intentional about creating an environment where employees, our clients and other stakeholders feel valued and inspired to reach their full potential and make authentic connections. We foster a workplace culture that embraces each person's diversity, including the extent to which they are similar or different. ABC leaders believe that an equitable and inclusive culture is not only the right thing to do, it is a business imperative. Read more about our commitment to diversity, equality, belonging and inclusion at abcfitness.com

ABOUT ABC:
ABC Fitness (abcfitness.com) is the premier provider of software and related services for the fitness industry and has built a reputation for excellence in support for clubs and their members. ABC is the trusted provider to boost performance and create a total fitness experience for over 41 million members of clubs of all sizes, whether a multi-location chain, franchise or an independent gym. Founded in 1981, ABC helps over 31,000 gyms and health clubs globally perform better and more profitably, offering a comprehensive SaaS club management solution that enables club operators to achieve optimal performance. ABC Fitness is a Thoma Bravo portfolio company, a private equity firm focused on investing in software and technology companies (thomabravo.com).

If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us!

Posted 1 day ago

Apply

7.0 years

0 Lacs

India

Remote

Job Title: Node.js Developer
Location: Remote
Experience: 7 Years
Employment Type: Full-time

Job Summary:
We are looking for a highly skilled Senior Node.js Developer with 6+ years of experience in designing, developing, and deploying scalable backend applications. The ideal candidate should have deep expertise in Node.js, Express.js, databases (SQL & NoSQL), and cloud services. You will play a crucial role in architecting solutions, optimizing performance, and ensuring high-quality code.

Key Responsibilities:
- Develop and maintain backend services using Node.js, Express.js, and Nest.js
- Design RESTful APIs and integrate third-party services
- Implement microservices architecture for scalability and efficiency
- Work with databases such as MongoDB, PostgreSQL, MySQL, or Redis
- Write efficient, reusable, and testable code following best practices
- Optimize applications for performance and scalability
- Collaborate with frontend developers, DevOps, and other team members
- Implement authentication and authorization mechanisms using JWT, OAuth, or similar technologies
- Ensure security best practices in API and backend development
- Work with CI/CD pipelines and deployment strategies
- Troubleshoot and debug issues in production and staging environments
- Write unit and integration tests using Jest, Mocha, or Chai

Required Skills & Qualifications:
- 6+ years of experience in Node.js backend development
- Strong expertise in Express.js, Nest.js, or Koa.js
- Proficiency in JavaScript, TypeScript, and modern ES6+ features
- Experience with SQL (PostgreSQL/MySQL) and NoSQL (MongoDB, Redis) databases
- Knowledge of message queues like RabbitMQ, Kafka, or Redis Pub/Sub
- Familiarity with Docker, Kubernetes, and cloud platforms (AWS, GCP, or Azure)
- Hands-on experience with GraphQL (Apollo, Hasura) is a plus
- Experience in writing unit and integration tests
- Strong problem-solving and debugging skills
- Excellent understanding of asynchronous programming and event-driven architectures
- Familiarity with DevOps practices and CI/CD pipelines

Preferred Skills:
- Experience with serverless frameworks (AWS Lambda, Firebase Functions)
- Knowledge of WebSockets and real-time communication
- Exposure to Terraform, Ansible, or other Infrastructure as Code (IaC) tools
- Experience with performance monitoring tools like Prometheus, Grafana, or Datadog

Posted 1 day ago

Apply

11.0 years

0 Lacs

India

On-site

Company Description
👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale — across all devices and digital mediums, and our people exist everywhere in the world (17500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in!

Job Description

REQUIREMENTS:
- Total experience: 11+ years
- Strong working experience with architecture and development in Java 8 or higher
- Experience with front-end frameworks such as React, Redux, Angular, or Vue
- Familiarity with Node.js and modern backend stacks
- Deep knowledge of AWS, Azure, or GCP platforms and services
- Hands-on experience with CI/CD pipelines, containerization (Docker, Kubernetes), and microservices
- Deep understanding of design patterns, data structures, and microservices architecture
- Strong knowledge of object-oriented programming, data structures, and algorithms
- Experience with scalable system design, performance tuning, and application security
- Experience integrating with SAP ERP systems, Net Revenue Management platforms, and O9
- Familiarity with data integration patterns, middleware, and message brokers (e.g., Kafka, RabbitMQ)
- A good understanding of UML and design patterns
- Excellent communication and stakeholder management skills

RESPONSIBILITIES:
- Writing and reviewing high-quality code
- Understanding functional requirements thoroughly and analyzing the client's needs in the context of the project
- Envisioning the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to realize it
- Determining and implementing design methodologies and toolsets
- Enabling application development by coordinating requirements, schedules, and activities
- Leading/supporting UAT and production rollouts
- Creating, understanding, and validating the WBS and estimated effort for a given module/task, and being able to justify it
- Addressing issues promptly, responding positively to setbacks and challenges with a mindset of continuous improvement
- Giving constructive feedback to team members and setting clear expectations
- Helping the team troubleshoot and resolve complex bugs
- Coming up with solutions to any issue raised during code/design review and being able to justify the decision taken
- Carrying out POCs to make sure that suggested designs/technologies meet the requirements

Qualifications
Bachelor's or master's degree in Computer Science, Information Technology, or a related field.

Posted 1 day ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.

In this role, you will:
- Design and build high-quality APIs that are scalable and global at the core
- Build custom policies, frameworks/components, error handling, and transaction tracing
- Set up Exchange catalogue orgs and assets in Anypoint Platform
- Set up security models and policies for consumers and producers of APIs and catalog assets
- Work across various platforms and with the associated stakeholders/business users
- Design, develop, test, and implement technical solutions based on business requirements and strategic direction
- Collaborate with other development teams, Enterprise Architecture, and support teams to design, develop, test, and maintain the various platforms and their integration with other systems
- Communicate with technical and non-technical groups on a regular basis as part of product/project support
- Support production releases as needed
- Perform peer reviews, CI/CD pipeline implementation, and service monitoring
- Act as ITSO delegate for the application(s)
- Be flexible in working hours, ready to work in shifts, and provide 24*7 on-call production support (including weekends) one week per month

Requirements
To be successful in this role, you should meet the following requirements:
- More than 8 years of experience in software development and design using Java/J2EE technologies, with hands-on experience across the full Spring stack and API implementation on cloud (GCP/AWS)
- Hands-on experience with Kubernetes (K8s) and Docker
- Experience with MQ, Sonar, and API gateways
- Experience in developing large-scale integration and API solutions
- Experience working with API Management, ARM, Exchange, and Access Management modules
- Experience understanding and analyzing complex business requirements and carrying out the system design accordingly
- Extensive knowledge of building REST-based APIs
- Good knowledge of API documentation (RAML/Swagger/OAS)
- Extensive knowledge of microservices architecture, with hands-on experience implementing it using Spring Boot
- Good knowledge of the security, scaling, and performance-tuning aspects of microservices
- Good understanding of SQL/NoSQL databases
- Good understanding of messaging platforms like Kafka, Pub/Sub, etc.
- Optional: understanding of cloud platforms
- Fair understanding of DevOps concepts
- Experience creating custom policies and custom connectors
- Excellent verbal and written communication skills, both technical and non-technical
- Willingness to work on POCs
- Experience handling support projects
- Spring Boot, ORM tool knowledge (e.g., Hibernate), and web services

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSBC Software Development India

Posted 1 day ago

Apply

15.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Entity: Technology
Job Family Group: IT&S Group

Job Description:
As a Staff Software Engineer, you lead the teams creating Enterprise Integration solutions for BP colleagues and external users. Your team's mission is to be the digital provider of choice to your area of BP – delivering innovation at speed where it's wanted, and day-in, day-out reliability where it's needed. You will operate in a dynamic and commercially focussed environment, with the resources of one of the world's largest digital organisations and leading digital and IT vendors working with you. You will be part of growing and strengthening our technical talent base – experts coming together to solve BP's and the world's problems.

Key Accountabilities
- Deliver stable and efficient integration solutions, including implementing new solutions and managing/remediating technical debt on existing platforms. We believe in DevOps – you build it, you run it!
- Ensure integration services in the scope of the role evolve in response to changing business needs and technology developments, and maintain alignment with bp standard operating environments and emerging technologies
- Work with functional stakeholders, project managers, business analysts, and users to understand requirements
- Lead a team of integration engineers, promoting a culture of agility and continuous improvement and embracing opportunities provided through increased automation
- Maximise value from current applications and emerging technologies, showing technical thought leadership in the business area across a wide range of technologies
- Collaborate with peers across I&E teams and mentor more junior engineers

Work location: Pune
Years of experience: 15+ years, with a minimum of 10 years of relevant experience.

Required Criteria
- Expert in Java and integration frameworks; able to design highly scalable integrations involving APIs, messaging, files, databases, and cloud services
- Experienced in leading multiple technology squads of engineers
- Experienced in integration tools like TIBCO/MuleSoft, Apache Camel/Spring Integration, Confluent Kafka, etc.
- Expert in Enterprise Integration Patterns (EIPs) and iBlocks to build secure integrations
- Willingness and ability to learn and become skilled in at least one more cloud-native (AWS and Azure) integration solution on top of your existing skillset
- Deep understanding of the interface development lifecycle, including design, security, design patterns for extensible and reliable code, automated unit and functional testing, CI/CD, and telemetry
- Strong experience in open-source technologies, and able to adopt AI-assisted development
- Experienced in enterprise integrations, EDA, and microservices architecture
- Strong inclusive leadership and people management
- Stakeholder management
- Embraces a culture of continuous improvement

Preferred Criteria
- Agile methodologies
- ServiceNow
- Risk management
- AI-assisted DevOps
- Monitoring and telemetry tools like Grafana and OpenTelemetry
- User experience analysis
- Cybersecurity and compliance

About bp
Our purpose is to deliver energy to the world, today and tomorrow. For over 100 years, bp has focused on discovering, developing, and producing oil and gas in the nations where we operate. We are one of the few companies globally that can provide governments and customers with an integrated energy offering. Delivering our strategy sustainably is fundamental to achieving our ambition to be a net zero company by 2050 or sooner!

We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment.
Please contact us to request accommodation.

Additional Information
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. Even though the job is advertised as full time, please contact the hiring manager or the recruiter, as flexible working arrangements may be considered.

Travel Requirement: Up to 10% travel should be expected with this role
Relocation Assistance: This role is eligible for relocation within country
Remote Type: This position is a hybrid of office/remote working

Skills: Agile Methodology, Agility core practices, Analytics, API and platform design, API Development, Business Analysis, Cloud Platforms, Coaching, Communication, Configuration management and release, Continuous deployment and release, Data Structures and Algorithms (Inactive), Digital Project Management, Documentation and knowledge sharing, Enterprise Integration Patterns, Facilitation, Information Security, iOS and Android development, Mentoring, Metrics definition and instrumentation, NoSql data modelling, Relational Data Modelling, Risk Management, Scripting {+ 7 more}

Legal Disclaimer:
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp's recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.). If you would like to request an adjustment/accommodation related to the recruitment process, please contact us.
If you are selected for a position and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.

Posted 1 day ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Job Description
Senior Data Engineer

Our Enterprise Data & Analytics (EDA) team is looking for an experienced Senior Data Engineer to join our growing data engineering team. You'll work in a collaborative Agile environment using the latest engineering best practices, with involvement in all aspects of the software development lifecycle. You will craft and develop curated data products, applying standard architectural and data modeling practices to maintain the foundation data layer serving as a single source of truth across Zendesk. You will be primarily developing Data Warehouse solutions in BigQuery/Snowflake using technologies such as dbt, Airflow, and Terraform.

What You Get To Do Every Single Day
- Collaborate with team members and business partners to collect business requirements, define successful analytics outcomes, and design data models
- Serve as the data model subject matter expert and spokesperson, demonstrated by the ability to address questions quickly and accurately
- Implement the Enterprise Data Warehouse by transforming raw data into schemas and data models for various business domains using SQL and dbt
- Design, build, and maintain ELT pipelines in the Enterprise Data Warehouse to ensure reliable business reporting, using Airflow, Fivetran, and dbt
- Optimize data warehousing processes by refining naming conventions, enhancing data modeling, and implementing best practices for data quality testing
- Build analytics solutions that provide practical insights into customer 360, finance, product, sales, and other key business domains
- Build and promote best engineering practices in version control, CI/CD, code review, and pair programming
- Identify, design, and implement internal process improvements: automating manual processes and optimizing data delivery
- Work with data and analytics experts to strive for greater functionality in our data systems

Basic Qualifications
What you bring to the role:
- 5+ years of data engineering experience building, working with, and maintaining data pipelines and ETL processes in big data environments
- 5+ years of experience in data modeling and data architecture in a production environment
- 5+ years of writing complex SQL queries
- 5+ years of experience with cloud columnar databases (we use Snowflake)
- 2+ years of production experience working with dbt and designing and implementing Data Warehouse solutions
- Ability to work closely with data scientists, analysts, and other stakeholders to translate business requirements into technical solutions
- Strong documentation skills for pipeline design and data flow diagrams
- Intermediate experience with any of these programming languages: Python, Go, Java, Scala (we primarily use Python)
- Integration with 3rd-party SaaS application APIs like Salesforce, Zuora, etc.
- Ensuring data integrity and accuracy by conducting regular data audits, identifying and resolving data quality issues, and implementing data governance best practices
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement

Preferred Qualifications
- Hands-on experience with the Snowflake data platform, including administration, SQL scripting, and query performance tuning
- Good knowledge of modern as well as classic data modeling – Kimball, Inmon, etc.
- Demonstrated experience in one or more business domains (finance, sales, marketing)
- 3+ completed "production-grade" projects with dbt
- Expert knowledge of Python

What Does Our Data Stack Look Like?
- ELT (Snowflake, Fivetran, dbt, Airflow, Kafka, Hightouch)
- BI (Tableau, Looker)
- Infrastructure (GCP, AWS, Kubernetes, Terraform, GitHub Actions)

Please note that Zendesk can only hire candidates who are physically located and plan to work from Karnataka or Maharashtra. Please refer to the location posted on the requisition for where this role is based.
Hybrid: In this role, our hybrid experience is designed at the team level to give you a rich onsite experience packed with connection, collaboration, learning, and celebration - while also giving you flexibility to work remotely for part of the week. This role must attend our local office for part of the week. The specific in-office schedule is to be determined by the hiring manager.

The Intelligent Heart Of Customer Experience
Zendesk software was built to bring a sense of calm to the chaotic world of customer service. Today we power billions of conversations with brands you know and love. Zendesk believes in offering our people a fulfilling and inclusive experience. Our hybrid way of working enables us to purposefully come together in person, at one of our many Zendesk offices around the world, to connect, collaborate and learn, whilst also giving our people the flexibility to work remotely for part of the week.

Zendesk is an equal opportunity employer, and we're proud of our ongoing efforts to foster global diversity, equity, & inclusion in the workplace. Individuals seeking employment and employees at Zendesk are considered without regard to race, color, religion, national origin, age, sex, gender, gender identity, gender expression, sexual orientation, marital status, medical condition, ancestry, disability, military or veteran status, or any other characteristic protected by applicable law. We are an AA/EEO/Veterans/Disabled employer. If you are based in the United States and would like more information about your EEO rights under the law, please click here. Zendesk endeavors to make reasonable accommodations for applicants with disabilities and disabled veterans pursuant to applicable federal and state law.
If you are an individual with a disability and require a reasonable accommodation to submit this application, complete any pre-employment testing, or otherwise participate in the employee selection process, please send an e-mail to peopleandplaces@zendesk.com with your specific accommodation request.

Posted 1 day ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Summary: Senior Engineer 2 (SDET)
Location: New Delhi
Division: Ticketmaster Sport International Engineering
Line Manager: Andrew French
Contract Terms: Permanent

THE TEAM
Ticketmaster Sport is the global leader in sports ticketing. From the smallest clubs to the biggest leagues and tournaments, we are trusted as their ticketing partner. You will be joining the Ticketmaster Sport International Engineering division, which is dedicated to the creation and maintenance of industry-standard ticketing software solutions. Our software is relied upon by our clients to manage and sell their substantial ticketing inventories. Our clients include some of the highest-profile clubs and organisations in sport. Reliability, quality, and performance are expected by our clients.

We provide an extensive catalogue of hosted services including back-office tooling, public-facing web sales channels, and other services and APIs. The team you will join is closely involved in all these areas. The Ticketmaster Sport International Engineering division comprises distributed software development teams working together in a highly collaborative environment. You will be joining our expanding engineering team based in New Delhi.

THE JOB
You will be joining a Microsoft .NET development team as a Senior Quality Assurance Engineer. The team you will be joining is responsible for data engineering in the Sport platform. This includes developing back-end systems which integrate with other internal Ticketmaster systems, as well as with our external business partners. You will be required to work with event-driven systems, message queueing, API development, and much more besides. There is a tremendous opportunity for you to make a difference. We are looking for QA engineers who can help us drive our platform forward from a quality assurance point of view, as well as act as mentors for more junior members of the team.
You will be working very closely with the team lead to ensure the quality of our software and to assist in the planning and decision-making process. Apart from standard manual testing activities, you will help improve our automated test suites, as well as be involved with performance testing. In essence, your job will be to ensure our software solutions are of the highest quality, robustness, and performance.

What You Will Be Doing
- Design, build, and maintain scalable and reusable test automation frameworks using C# .NET and Selenium
- Collaborate with developers, product managers, and QA to understand requirements and build comprehensive test plans
- Define, develop, and implement quality assurance practices, procedures, and test plans
- Create, execute, and maintain automated functional, regression, integration, and performance tests
- Ensure high code quality and testing standards across the team through code reviews and best practices
- Investigate test failures, diagnose bugs, and file detailed bug reports
- Produce test and quality reports
- Integrate test automation with CI/CD pipelines (GitLab, Azure DevOps, Jenkins)
- Operate effectively within an organisation with teams spread across the globe
- Work effectively within a dynamic team environment to define and advocate for QA standards and best practices to ensure the highest level of quality

Technical Skills
Must have:
- 5+ years of experience in test automation development, preferably in an SDET role
- Strong hands-on experience with C# .NET and Selenium WebDriver
- Experience with tools like NUnit, SpecFlow, or similar test libraries
- Solid understanding of object-oriented programming (OOP) and software design principles
- Experience developing and maintaining custom automation frameworks from scratch
- Proficiency in writing clear, concise, and comprehensive test cases and test plans
- Experience working in scrum teams within Agile methodology
- Experience developing regression and functional test plans and managing defects
- Ability to understand business requirements and identify scenarios for automated and manual testing
- Experience in performance testing using Gatling
- Experience working with Git and CI/CD pipelines
- Experience with web service (e.g., RESTful services) testing, including test automation with REST Assured/Postman
- Proficiency working with relational databases such as MSSQL
- A deep understanding of web protocols and standards (e.g., HTTP, REST)
- A strong problem-solving and detail-oriented mindset

Nice to have:
- Exposure to performance testing tools
- Testing enterprise applications deployed to cloud environments such as AWS
- Experience with static code analysis tools like SonarQube
- Building test infrastructures using containerisation technologies such as Docker, and working with continuous delivery or continuous release pipelines
- Experience in microservice development
- Experience with Octopus Deploy
- Experience with TestRail
- Experience with event-driven architectures and messaging patterns and practices
- Experience with Kafka, AWS SQS, or other similar technologies

You (Behavioural Skills)
- Excellent communication and interpersonal skills. We work with people all over the globe using English as a shared language. As a senior engineer you will be expected to help managers make decisions by describing problems and proposing solutions, and to respond positively to challenge.
- Excellent problem-solving skills
- Desire to take on responsibility and to grow as a quality assurance software engineer
- Enthusiasm for technology and a desire to communicate that to your fellow team members
- The ability to pick up any ad-hoc technology and run with it
- Continuous curiosity for new technologies on the horizon

LIFE AT TICKETMASTER
We are proud to be a part of Live Nation Entertainment, the world's largest live entertainment company.
Our vision at Ticketmaster is to connect people around the world to the live events they love. As the world's largest ticket marketplace and the leading global provider of enterprise tools and services for the live entertainment business, we are uniquely positioned to successfully deliver on that vision. We do it all with an intense passion for Live and an inspiring and diverse culture driven by accessible leaders, attentive managers, and enthusiastic teams. If you're passionate about live entertainment like we are, and you want to work at a company dedicated to helping millions of fans experience it, we want to hear from you.

Our work is guided by our values:
- Reliability - We understand that fans and clients rely on us to power their live event experiences, and we rely on each other to make it happen
- Teamwork - We believe individual achievement pales in comparison to the level of success that can be achieved by a team
- Integrity - We are committed to the highest moral and ethical standards on behalf of the countless partners and stakeholders we represent
- Belonging - We are committed to building a culture in which all people can be their authentic selves, have an equal voice, and have opportunities to thrive

EQUAL OPPORTUNITIES
We are passionate and committed to our people and go beyond the rhetoric of diversity and inclusion. You will be working in an inclusive environment and be encouraged to bring your whole self to work. We will do all that we can to help you successfully balance your work and home life. As a growing business we will encourage you to develop your professional and personal aspirations, enjoy new experiences, and learn from the talented people you will be working with. It's talent that matters to us, and we encourage applications from people irrespective of their gender, race, sexual orientation, religion, age, disability status or caring responsibilities.

Posted 1 day ago

Apply

7.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Job Title: Senior Software Development Engineer (Sr. SDE) Location: Noida About Us: At Clearwater Analytics, we are on a mission to become the world's most trusted and comprehensive technology platform for investment management, reporting, accounting, and analytics. We partner with sophisticated institutional investors worldwide and are seeking a Software Development Engineer who shares our passion for innovation and client commitment. Role Overview: We are seeking a skilled Software Development Engineer with strong coding and design skills, as well as hands-on experience in cloud technologies and distributed architecture. This role focuses on delivering high-quality software solutions within the FinTech sector, particularly in the Front Office, OEMS, PMS, and Asset Management domains. Key Responsibilities: Design and develop scalable, high-performance software solutions in a distributed architecture environment. Collaborate with cross-functional teams to ensure engineering strategies align with business objectives and client needs. Implement real-time and asynchronous systems with a focus on event-driven architecture. Ensure operational excellence by adhering to best practices in software development and engineering. Present technical concepts and project updates clearly to stakeholders, fostering effective communication. Requirements: 7 - 10 years of hands-on experience in software development, ideally within the FinTech sector. Strong coding and design skills, with a solid understanding of software development principles. Deep expertise in cloud platforms (AWS/GCP/Azure) and distributed architecture. Experience with real-time systems, event-driven architecture, and engineering excellence in a large-scale environment. Proficiency in Java and familiarity with messaging systems (JMS/Kafka/MQ). Excellent verbal and written communication skills. 
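The event-driven, message-based style this role calls for (JMS/Kafka/MQ) is language-agnostic; the following minimal Python sketch stands in for a real broker with an in-memory bus, and the order-fill event and handler names are hypothetical, not from the posting:

```python
from collections import defaultdict

class EventBus:
    """Toy in-process stand-in for a broker such as Kafka or JMS."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._handlers[topic].append(handler)

    def publish(self, topic, event):
        # Deliver the event to every handler registered for the topic.
        for handler in self._handlers[topic]:
            handler(event)

# Hypothetical domain: order-fill events updating a position book.
positions = {}

def on_fill(event):
    positions[event["symbol"]] = positions.get(event["symbol"], 0) + event["qty"]

bus = EventBus()
bus.subscribe("fills", on_fill)
bus.publish("fills", {"symbol": "AAPL", "qty": 100})
bus.publish("fills", {"symbol": "AAPL", "qty": -40})
```

A production system would replace the in-memory `publish` with a real producer and add the ordering, retry, and idempotency guarantees that real-time asynchronous systems require.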
Desired Qualifications: Experience in the FinTech sector, particularly in Front Office, OEMS, PMS, and Asset Management at scale. Bonus: Experience with BigTech, Groovy, Bash, Python, and knowledge of GenAI/AI technologies. What we offer: Business casual atmosphere in a flexible working environment. Team-focused culture that promotes innovation and ownership. Access to cutting-edge investment reporting technology and expertise. Defined and undefined career pathways, allowing you to grow your way. Competitive medical, dental, vision, and life insurance benefits. Maternity and paternity leave. Personal Time Off and Volunteer Time Off to give back to the community. RSUs, as well as an employee stock purchase plan and a 401(k) with a match. Work from anywhere 3 weeks out of the year. Work-from-home Fridays. Why Join Us? This is an incredible opportunity to be part of a dynamic engineering team that is shaping the future of investment management technology. If you're ready to make a significant impact and advance your career, apply now!

Posted 1 day ago

Apply

20.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Company Overview With 80,000 customers across 150 countries, UKG is the largest U.S.-based private software company in the world. And we’re only getting started. Ready to bring your bold ideas and collaborative mindset to an organization that still has so much more to build and achieve? Read on. At UKG, you get more than just a job. You get to work with purpose. Our team of U Krewers is on a mission to inspire every organization to become a great place to work through our award-winning HR technology built for all. Here, we know that you’re more than your work. That’s why our benefits help you thrive personally and professionally, from wellness programs and tuition reimbursement to U Choose — a customizable expense reimbursement program that can be used for more than 200 needs that best suit you and your family, from student loan repayment, to childcare, to pet insurance. Our inclusive culture, active and engaged employee resource groups, and caring leaders value every voice and support you in doing the best work of your career. If you’re passionate about our purpose — people — then we can’t wait to support whatever gives you purpose. We’re united by purpose, inspired by you. Job Description We are seeking a seasoned Senior Director of Software Engineering with deep expertise in Data Platforms to lead and scale our data engineering organization. With deep industry experience, you will bring strategic vision, technical leadership, and operational excellence to drive innovation and deliver robust, scalable, and high-performing data solutions. You will partner closely with cross-functional teams to enable data-driven decision-making across the enterprise. Key Responsibilities Define and execute the engineering strategy for modern, scalable data platforms. Lead, mentor, and grow a high-performing engineering organization. Partner with product, architecture, and infrastructure teams to deliver resilient data solutions.
Drive technical excellence through best practices in software development, data modeling, security, and automation. Oversee the design, development, and deployment of data pipelines, lakehouses, and real-time analytics platforms. Ensure platform reliability, availability, and performance through proactive monitoring and continuous improvement. Foster a culture of innovation, ownership, and continuous learning. Qualifications: 20+ years of experience in software engineering with a strong focus on data platforms and infrastructure. Proven leadership of large-scale, distributed engineering teams. Deep understanding of modern data architectures (e.g., data lakes, lakehouses, streaming, warehousing). Proficiency in cloud-native data platforms (e.g., AWS, Azure, GCP), big data ecosystems (e.g., Spark, Kafka, Hive), and data orchestration tools. Strong software development background with expertise in one or more languages such as Python, Java, or Scala. Demonstrated success in driving strategic technical initiatives and cross-functional collaboration. Strong communication and stakeholder management skills at the executive level. Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field (Ph.D. a plus). Where we’re going UKG is on the cusp of something truly special. Worldwide, we already hold the #1 market share position for workforce management and the #2 position for human capital management. Tens of millions of frontline workers start and end their days with our software, with billions of shifts managed annually through UKG solutions today. Yet it’s our AI-powered product portfolio designed to support customers of all sizes, industries, and geographies that will propel us into an even brighter tomorrow! UKG is proud to be an equal opportunity employer and is committed to promoting diversity and inclusion in the workplace, including the recruitment process.
Disability Accommodation in the Application and Interview Process For individuals with disabilities who need additional assistance at any point in the application and interview process, please email UKGCareers@ukg.com.

Posted 1 day ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Your IT Future, Delivered. Senior Software Engineer (AI/ML Engineer) With a global team of 5600+ IT professionals, DHL IT Services connects people and keeps the global economy running by continuously innovating and creating sustainable digital solutions. We work beyond global borders and push boundaries across all dimensions of logistics. You can leave your mark shaping the technology backbone of the world's biggest logistics company. All our offices have earned #GreatPlaceToWork certification, reflecting our commitment to exceptional employee experiences. Digitalization. Simply delivered. At DHL IT Services, we are designing, building and running IT solutions for the whole of DPDHL globally. Grow together. The AI & Analytics team builds and runs solutions to get much more value out of our data. We help our business colleagues all over the world with machine learning algorithms, predictive models and visualizations. We manage more than 46 AI & Big Data applications, 3,000 active users, 87 countries, and up to 100,000,000 daily transactions. Integrating AI & Big Data into business processes to compete in a data-driven world requires state-of-the-art technology. Our infrastructure, hosted on-prem and in the cloud (Azure and GCP), includes MapR, Airflow, Spark, Kafka, Jupyter, Kubeflow, Jenkins, GitHub, Tableau, Power BI, Synapse (Analytics), Databricks and further interesting tools. We like to do everything in an Agile/DevOps way. No more throwing the “problem code” to support, no silos. Our teams are completely product oriented, having end-to-end responsibility for the success of our product. Ready to embark on the journey? Here’s what we are looking for: Currently, we are looking for an AI / Machine Learning Engineer. In this role, you will have the opportunity to design and develop solutions, contribute to roadmaps of Big Data architectures and provide mentorship and feedback to more junior team members.
We are looking for someone to help us manage the petabytes of data we have and turn them into value. Does that sound a bit like you? Let’s talk! Even if you don’t tick all the boxes below, we’d love to hear from you; our new department is rapidly growing and we’re looking for many people with the can-do mindset to join us on our digitalization journey. Thank you for considering DHL as the next step in your career – we do believe we can make a difference together! What will you need? University degree in Computer Science, Information Systems, Business Administration, or a related field. 2+ years of experience in a Data Scientist / Machine Learning Engineer role. Strong analytic skills related to working with structured, semi-structured and unstructured datasets. Advanced machine learning techniques: Decision Trees, Random Forest, Boosting Algorithms, Neural Networks, Deep Learning, Support Vector Machines, Clustering, Bayesian Networks, Reinforcement Learning, Feature Reduction / Engineering, Anomaly Detection, Natural Language Processing (incl. sentiment analysis, Topic Modeling), Natural Language Generation. Statistics / Mathematics: Data Quality Analysis, Data Identification, Hypothesis Testing, Univariate / Multivariate Analysis, Cluster Analysis, Classification/PCA, Factor Analysis, Linear Modeling, Time Series, distribution / probability theory, and/or strong experience in specialized analytics tools and technologies (including, but not limited to, Power BI, Tableau, Kubeflow, ML Flow, Airflow, Jenkins, and CI/CD pipelines). Lead the integration of large language models into AI applications. Very good Python programming skills. Develop applications and deploy models in production. As an AI/ML Engineer, you will be responsible for developing applications and systems that leverage AI tools, Cloud AI services, and Generative AI models.
Your role includes designing cloud-based or on-premises application pipelines that meet production-ready standards, utilizing deep learning, neural networks, chatbots, and image processing technologies. Professional & Technical Skills: Essential Skills: Expertise in Large Language Models. Strong knowledge of statistical analysis and machine learning algorithms. Experience with data visualization tools such as Tableau or Power BI. Practical experience with various machine learning algorithms, including linear regression, logistic regression, decision trees, and clustering techniques. Proficient in data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity. Awareness of Apache Spark and Hadoop. Awareness of Agile / Scrum ways of working. Identify the right modeling approach(es) for a given scenario and articulate why the approach fits. Assess data availability and modeling feasibility. Review interpretation of model results. Experience in the logistics industry domain would be an added advantage. Roles & Responsibilities: Act as a Subject Matter Expert (SME). Collaborate with and manage team performance. Make decisions that impact the team. Work with various teams and contribute to significant decision-making processes. Provide solutions to challenges that affect multiple teams. Lead the integration of large language models into AI applications. Research and implement advanced AI techniques to improve system performance. Assist in the development and deployment of AI solutions across different domains. You should have: Certifications in some of the core technologies. Ability to collaborate across different teams/geographies/stakeholders/levels of seniority. Customer focus with an eye on continuous improvement. An energetic, enthusiastic and results-oriented personality. Ability to coach other team members; you must be a team player!
Strong will to overcome the complexities involved in developing and supporting data pipelines. Language requirements: English – fluent spoken and written (C1 level). An array of benefits for you: Hybrid work arrangements to balance in-office collaboration and home flexibility. Annual Leave: 42 days off apart from Public / National Holidays. Medical Insurance: self + spouse + 2 children. An option to opt for Voluntary Parental Insurance (parents / parents-in-law) at a nominal premium, covering pre-existing diseases. In-house training programs: professional and technical training certifications.
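Of the techniques the posting lists, clustering is compact enough to sketch from scratch. Below is a minimal 1-D k-means in pure Python; the data points and choice of k are illustrative only, and a real pipeline would use a library implementation over multi-dimensional features:

```python
import random

def kmeans_1d(points, k, iters=20, seed=0):
    """Minimal 1-D k-means: assign each point to its nearest centroid,
    then recompute centroids as cluster means, and repeat."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Keep the old centroid if a cluster ends up empty.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 9.8, 10.2, 10.0]
print(kmeans_1d(data, 2))  # two centroids, near 1.0 and 10.0
```

The same assign-then-recompute loop is what scikit-learn's `KMeans` runs, just vectorized and with smarter initialization.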

Posted 1 day ago

Apply

130.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description Senior Manager, AI Insights Engineering The Opportunity Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be a part of a team with passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats. Our Technology Centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company's IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Center helps to ensure we can manage and improve each location: from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. And together, we must leverage the strength of our team to collaborate globally to optimize connections and share best practices across the Tech Centers.
Role Overview Are you driven to create real-world impact through applied AI, modern analytics, and intelligent automation, and passionate about shaping the future of data and analytics in the AI era? We are looking for an experienced and forward-thinking Senior AI Engineer / Full-Stack Developer in Data & Analytics to join our global team and help with our transformation from BI to AI. This role combines deep expertise in data engineering, BI/visualization, and AI/GenAI with a hands-on builder mindset. You will lead the design and delivery of next-generation analytics solutions, integrating semantic layers, streaming data, embedded insights, and AI agents. The ideal candidate is equally comfortable writing code, shaping architecture, experimenting with GenAI models, and mentoring teams, bridging BI, ML, and engineering in a highly regulated, global pharma environment. What Will You Do In This Role Architect and build data and analytics solutions that align with our transformation from BI to AI, focusing on: semantic models, governed self-service, and report cataloging; embedded analytics, feedback loops, and personalized insights; and AI/GenAI orchestration, including RAG, prompt chaining, and Copilot integrations. Design and deliver data products and microservices combining BI tools (Power BI, Qlik, ThoughtSpot), vector databases and LLM-powered APIs, and real-time streaming and telemetry (Kafka, Fabric, Snowflake, Databricks). Build and deploy AI models for use cases like semantic search and insight generation, report summarization and auto-commentary, and decision-making agents and anomaly detection. Collaborate across data engineering, business, product, and compliance teams to align on technical architecture, governance, and platform scalability. Lead proof-of-concepts, technical pilots, and full-stack AI solutions from ideation to production. Contribute to open discussions and communities of practice around MLOps, AI tooling, metadata modeling, and observability.
Continuously scan the AI/LLM landscape and identify innovative approaches that bring business value. What Should You Have Master's degree in Computer Science, Data Engineering, or a related field. 10+ years of experience in data/analytics/AI roles, with demonstrated ability to build complex solutions end-to-end. Deep expertise in Python, SQL, and modern data engineering stacks (e.g., dbt, Snowflake, Databricks, Azure/AWS). Proficiency in AI/ML fundamentals: neural networks, vector embeddings, prompt engineering, transformers, evaluation metrics. Hands-on experience with LLMs and GenAI frameworks (GPT-3.5/4, Llama, Claude, RAG, LangChain, Haystack, etc.). Experience building and deploying REST APIs and integrating AI into BI tools or operational systems. Strong background in BI platforms such as Power BI, Qlik, and ThoughtSpot, with experience in data modeling, DAX/set analysis, and performance tuning. Excellent communication skills; able to explain technical designs and model decisions to technical and non-technical stakeholders. Nice to have: Experience with agent orchestration platforms, Copilot development, or autonomous decision agents. Familiarity with the pharma or life sciences domain and its regulatory requirements (HIPAA, GxP, GDPR). Contributions to open-source AI/analytics tools or frameworks. Experience in multi-tenant SaaS architecture, metadata governance, or telemetry collection for usage-based optimization. Certifications in cloud (AWS, Azure), data engineering, or GenAI technologies. Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation. Who We Are We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else.
For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world. What We Look For Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us—and start making your impact today. #HYDIT2025 Search Firm Representatives Please Read Carefully Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails. 
Employee Status Regular Relocation VISA Sponsorship Travel Requirements Flexible Work Arrangements Hybrid Shift Valid Driving License Hazardous Material(s) Required Skills Data Modeling, Design Applications, Release Management, Requirements Management, Solution Architecture, System Designs, Systems Integration Preferred Skills Job Posting End Date 08/15/2025 A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date. Requisition ID R356862
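The semantic-search retrieval step this role describes (vector embeddings queried for RAG) reduces to ranking stored vectors by cosine similarity. The sketch below uses hypothetical 3-dimensional "embeddings" and document IDs in place of real model output and a real vector database:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, index, top_k=2):
    """Rank stored (doc_id, vector) pairs by similarity to the query."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:top_k]]

# Hypothetical tiny index; real embeddings have hundreds of dimensions.
index = [("report_a", [0.9, 0.1, 0.0]),
         ("report_b", [0.1, 0.9, 0.1]),
         ("report_c", [0.8, 0.2, 0.1])]
print(retrieve([1.0, 0.0, 0.0], index))  # ['report_a', 'report_c']
```

A vector database performs the same ranking with approximate-nearest-neighbor indexes so it scales past brute-force comparison; the retrieved documents are then passed to the LLM as RAG context.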

Posted 1 day ago

Apply

5.0 years

0 Lacs

India

On-site

Job Summary: Senior Engineer 2 (SDET) Location: New Delhi Division: Ticketmaster Sport International Engineering Line Manager: Andrew French Contract Terms: Permanent THE TEAM Ticketmaster Sport is the global leader in sports ticketing. From the smallest clubs to the biggest leagues and tournaments, we are trusted as their ticketing partner. You will be joining the Ticketmaster Sports International Engineering division which is dedicated to the creation and maintenance of industry standard ticketing software solutions. Our software is relied upon by our clients to manage and sell their substantial ticketing inventories. Our clients include some of the highest profile clubs and organisations in sport. Reliability, quality, and performance are expected by our clients. We provide an extensive catalogue of hosted services including back-office tooling, public-facing web sales channels, and other services and APIs. The team you will join is closely involved in all these areas. The Ticketmaster Sports International Engineering division comprises distributed software development teams working together in a highly collaborative environment. You will be joining our expanding engineering team based in New Delhi. THE JOB You will be joining a Microsoft .Net development team as a Senior Quality Assurance Engineer. The team you will be joining is responsible for data engineering in the Sport platform. This includes developing back-end systems which integrate with other internal Ticketmaster systems, as well as with our external business partners. You will be required to work with event-driven systems, message queueing, API development, and much more besides. There is a tremendous opportunity for you to make a difference. We are looking for QA engineers who can help us drive our platform forward from a quality assurance point of view, as well as act as a mentor for more junior members of the team. 
You will be working very closely with the team lead to ensure the quality of our software and to assist in the planning and decision-making process. Apart from standard manual testing activities, you will help improve our automated test suites, as well as be involved with performance testing. In essence, your job will be to ensure our software solutions are of the highest quality, robustness, and performance. WHAT YOU WILL BE DOING Design, build, and maintain scalable and reusable test automation frameworks using C# .Net and Selenium. Collaborate with developers, product managers, and QA to understand requirements and build comprehensive test plans. Define, develop, and implement quality assurance practices, procedures, and test plans. Create, execute, and maintain automated functional, regression, integration and performance tests. Ensure high code quality and testing standards across the team through code reviews and best practices. Investigate test failures, diagnose bugs, and file detailed bug reports. Produce test and quality reports. Integrate test automation with CI/CD pipelines (GitLab, Azure DevOps, Jenkins). Operate effectively within an organisation with teams spread across the globe. Work effectively within a dynamic team environment to define and advocate for QA standards and best practices to ensure the highest level of quality. TECHNICAL SKILLS Must have: 5+ years of experience in test automation development, preferably in an SDET role. Strong hands-on experience with C# .Net and Selenium WebDriver. Experience with tools like NUnit, SpecFlow, or similar test libraries. Solid understanding of object-oriented programming (OOP) and software design principles. Experience developing and maintaining custom automation frameworks from scratch. Proficiency in writing clear, concise and comprehensive test cases and test plans. Experience of working in scrum teams within Agile methodology.
Experience in developing regression and functional test plans and managing defects. Understanding of business requirements and the ability to identify scenarios for automated and manual testing. Experience in performance testing using Gatling. Experience working with Git and CI/CD pipelines. Experience with web service testing (e.g., RESTful services), including test automation with Rest Assured/Postman. Proficiency working with relational databases such as MSSQL. A deep understanding of web protocols and standards (e.g., HTTP, REST). A strong problem-solving and detail-oriented mindset. Nice to have: Exposure to performance testing tools. Testing enterprise applications deployed to cloud environments such as AWS. Experience with static code analysis tools like SonarQube. Building test infrastructures using containerisation technologies such as Docker and working with continuous delivery or continuous release pipelines. Experience in microservice development. Experience with Octopus Deploy. Experience with TestRail. Experience with event-driven architectures, messaging patterns and practices. Experience with Kafka, AWS SQS or other similar technologies. YOU (BEHAVIOURAL SKILLS) Excellent communication and interpersonal skills. We work with people all over the globe using English as a shared language. As a senior engineer, you will be expected to help managers make decisions by describing problems and proposing solutions, and to respond positively to challenge. Excellent problem-solving skills. Desire to take on responsibility and to grow as a quality assurance software engineer. Enthusiasm for technology and a desire to communicate that to your fellow team members. The ability to pick up any ad-hoc technology and run with it. Continuous curiosity for new technologies on the horizon. LIFE AT TICKETMASTER We are proud to be a part of Live Nation Entertainment, the world’s largest live entertainment company.
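Testing the event-driven, message-queue systems this posting describes usually means asserting on eventually-consistent state. A common pattern, shown here as a Python sketch even though the team's stack is C#/.NET, is a polling `wait_until` helper instead of fixed sleeps; the simulated delayed message is hypothetical:

```python
import time

def wait_until(condition, timeout=2.0, interval=0.05):
    """Poll a zero-argument condition until it returns True or the
    timeout elapses; returns the final result. This avoids both flaky
    fixed sleeps and unbounded waits in integration tests."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return condition()

# Simulate a message that "arrives" on a queue after a short delay.
received = []
start = time.monotonic()

def message_arrived():
    if not received and time.monotonic() - start > 0.2:
        received.append("order-confirmed")
    return bool(received)

assert wait_until(message_arrived, timeout=2.0)
```

The same shape exists in C# test code as a `Task`-based polling helper or via libraries that wrap `SpinWait`; the key design point is that the timeout bounds the test while the interval keeps it responsive.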

Posted 1 day ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Reference # 321741BR Job Type Full Time Your role Develop and maintain solutions focused on data processing and analysis using Java 11 and any distributed computing engine. Collaborate with stakeholders to integrate complex analytics into the platform. Optimize and enhance existing platform components to improve performance, scalability, and reliability. Work closely with cross-functional teams to implement new features that support portfolio reporting and data modeling. Contribute to system architecture discussions and promote best practices in coding, testing and deployment. Your team You'll be working in the Wealth Management IT team in Pune. You will be part of a global team that supports applications in our Core Operations arena, working closely with development and business teams that are global as well. Your expertise 10+ years of experience in software development, preferably in the financial or data analytics industry. Proficiency in Java 17 with a strong understanding of modern Java practices and performance optimizations. Experience with Apache Storm or other distributed processing frameworks (e.g., Spark, Flink, Kafka Streams). Understanding of cloud platforms, particularly Microsoft Azure, including services like Azure Data Lake, Azure Blob Storage, and Azure Data Factory. Spark (or any other distributed big data processing engine, e.g., Flink or Trino). AKS (or any other Kubernetes provider). Familiarity with CI/CD pipelines and cloud environments. Strong problem-solving skills and the ability to work in a fast-paced, dynamic environment. About Us UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.
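The keyed, stateful aggregation at the heart of the streaming engines this role lists (Storm, Spark, Flink, Kafka Streams) can be sketched in a few lines. This Python sketch folds events into running per-key totals; the event shape and instrument keys are illustrative, not from the posting:

```python
from collections import Counter

def process_stream(events, state=None):
    """Stateful per-key aggregation, the core idea behind keyed state in
    Kafka Streams or Flink: fold each event into a running count per key."""
    state = state if state is not None else Counter()
    for event in events:
        state[event["key"]] += event["value"]
    return state

# Hypothetical portfolio events keyed by instrument type.
batch_1 = [{"key": "bond", "value": 5}, {"key": "equity", "value": 3}]
batch_2 = [{"key": "bond", "value": 2}]

state = process_stream(batch_1)
state = process_stream(batch_2, state)  # state survives across micro-batches
print(dict(state))  # {'bond': 7, 'equity': 3}
```

What the real engines add on top of this loop is partitioning the key space across workers, checkpointing the state for fault tolerance, and windowing; the per-key fold itself is unchanged.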
How We Hire We may request you to complete one or more assessments during the application process. Learn more Join us At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves. We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us. Disclaimer / Policy Statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.

Posted 1 day ago

Apply

7.0 - 9.0 years

6 - 8 Lacs

Hyderābād

On-site

General information Country India State Telangana City Hyderabad Job ID 45479 Department Development

Description & Requirements
The Senior Java Developer is responsible for architecting and developing advanced Java solutions. This role involves leading the design and implementation of microservice architectures with Spring Boot, optimizing services for performance and scalability, and ensuring code quality. The Senior Developer will also mentor junior developers and collaborate closely with cross-functional teams to deliver comprehensive technical solutions.

Essential Duties:
- Lead the development of scalable, robust, and secure Java components and services.
- Architect and optimize microservice solutions using Spring Boot.
- Translate customer requirements into comprehensive technical solutions.
- Conduct code reviews and maintain high code quality standards.
- Optimize and scale microservices for performance and reliability.
- Collaborate effectively with cross-functional teams to innovate and develop solutions.
- Lead projects and mentor engineers in best practices and innovative solutions.
- Coordinate with customer and client-facing teams for effective solution delivery.

Basic Qualifications:
- Bachelor’s degree in Computer Science or a related field.
- 7-9 years of experience in Java development.
- Expertise in designing and implementing microservices with Spring Boot.
- Extensive experience applying design patterns and system design principles, with expertise in event-driven and domain-driven design methodologies.
- Extensive experience with multithreading, asynchronous and defensive programming.
- Proficiency in MongoDB, SQL databases, and S3 data storage.
- Experience with Kafka, Kubernetes, AWS services & AWS SDK.
- Hands-on experience with Apache Spark.
- Strong knowledge of Linux, Git, and Docker.
- Familiarity with Agile methodologies and tools like Jira and Confluence.
- Excellent communication and leadership skills.
Preferred Qualifications
- Experience with Spark using Spring Boot.
- Familiarity with the C4 Software Architecture Model.
- Experience using tools like Lucidchart for architecture and flow diagrams.

About Infor
Infor is a global leader in business cloud software products for companies in industry specific markets. Infor builds complete industry suites in the cloud and efficiently deploys technology that puts the user experience first, leverages data science, and integrates easily into existing systems. Over 60,000 organizations worldwide rely on Infor to help overcome market disruptions and achieve business-wide digital transformation. For more information visit www.infor.com

Our Values
At Infor, we strive for an environment that is founded on a business philosophy called Principle Based Management™ (PBM™) and eight Guiding Principles: integrity, stewardship & compliance, transformation, principled entrepreneurship, knowledge, humility, respect, self-actualization. Increasing diversity is important to reflect our markets, customers, partners, and the communities we serve now and in the future. We have a relentless commitment to a culture based on PBM. Informed by the principles that allow a free and open society to flourish, PBM™ prepares individuals to innovate, improve, and transform while fostering a healthy, growing organization that creates long-term value for its clients and supporters and fulfillment for its employees.

Infor is an Equal Opportunity Employer. We are committed to creating a diverse and inclusive work environment. Infor does not discriminate against candidates or employees because of their sex, race, gender identity, disability, age, sexual orientation, religion, national origin, veteran status, or any other protected status under the law.
If you require accommodation or assistance at any time during the application or selection processes, please submit a request by following the directions located in the FAQ section at the bottom of the infor.com/about/careers webpage.

Posted 1 day ago

Apply

12.0 years

5 - 9 Lacs

Hyderābād

On-site

Job Description Overview
PepsiCo Data BI & Integration Platforms is seeking an experienced Cloud Platform Databricks SME, responsible for overseeing platform administration, security, new NPI tools integration, migrations, platform maintenance, and other platform administration activities on Azure/AWS. The ideal candidate will have hands-on experience with Azure/AWS services: Infrastructure as Code (IaC), platform provisioning & administration, cloud network design, cloud security principles, and automation.

Responsibilities
The Databricks Subject Matter Expert (SME) plays a pivotal role in administration, security best practices, platform sustain support, new tools adoption, cost optimization, and supporting new patterns and design solutions on the Databricks platform. Here’s a breakdown of typical responsibilities:

Core Technical Responsibilities
- Architect and optimize big data pipelines using Apache Spark, Delta Lake, and Databricks-native tools.
- Design scalable data ingestion and transformation workflows, including batch and streaming (e.g., Kafka, Spark Structured Streaming).
- Create integration guidelines to configure and integrate Databricks with existing security tools relevant to data access control.
- Implement data security and governance using Unity Catalog, access controls, and data classification techniques.
- Support migration of legacy systems to Databricks on cloud platforms like Azure, AWS, or GCP.
- Manage cloud platform operations with a focus on FinOps support, optimizing resource utilization, cost visibility, and governance across multi-cloud environments.

Collaboration & Advisory
- Act as a technical advisor to data engineering and analytics teams, guiding best practices and performance tuning.
- Partner with architects and business stakeholders to align Databricks solutions with enterprise goals.
- Lead proof-of-concept (PoC) initiatives to demonstrate Databricks capabilities for specific use cases.
Strategic & Leadership Contributions
- Mentor junior engineers and promote knowledge sharing across teams.
- Contribute to platform adoption strategies, including training, documentation, and internal evangelism.
- Stay current with Databricks innovations and recommend enhancements to existing architectures.

Specialized Expertise (Optional but Valuable)
- Machine Learning & AI integration using MLflow, AutoML, or custom models.
- Cost optimization and workload sizing for large-scale data processing.
- Compliance and audit readiness for regulated industries.

Qualifications
- Bachelor’s degree in Computer Science.
- At least 12 years of experience in IT cloud infrastructure, architecture, and operations, including security, with at least 5 years in a platform admin role.
- Strong understanding of data security principles and best practices.
- Expertise in the Databricks platform, its security features, Unity Catalog, and data access control mechanisms.
- Experience with data classification and masking techniques.
- Strong understanding of cloud cost management, with hands-on experience in usage analytics, budgeting, and cost optimization strategies across multi-cloud platforms.
- Strong knowledge of cloud architecture, design, and deployment principles and practices, including microservices, serverless, containers, and DevOps.
- Deep expertise in Azure/AWS big data & analytics technologies, including Databricks, real-time data ingestion, data warehouses, serverless ETL, NoSQL databases, DevOps, Kubernetes, virtual machines, web/function apps, and monitoring and security tools.
- Deep expertise in Azure/AWS networking and security fundamentals, including network endpoints & network security groups, firewalls, external/internal DNS, load balancers, virtual networks, and subnets.
- Proficient in scripting and automation tools, such as PowerShell, Python, Terraform, and Ansible.
- Excellent problem-solving, analytical, and communication skills, with the ability to explain complex technical concepts to non-technical audiences.
- Certifications in Azure/AWS/Databricks platform administration, networking, and security are preferred.
- Strong self-organization, time management, and prioritization skills.
- A high level of attention to detail, excellent follow-through, and reliability.
- Strong collaboration, teamwork, and relationship-building skills across multiple levels and functions in the organization.
- Ability to listen, establish rapport, and build credibility as a strategic partner vertically within the business unit or function, as well as with leadership and functional teams.
- Strategic thinker focused on business value results that utilize technical solutions.
- Strong communication skills in writing, speaking, and presenting.
- Able to work effectively in a multi-tasking environment.
- Fluent in English.
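The qualifications above call out data classification and masking techniques. As a loose, conceptual sketch of that idea only (tag columns by sensitivity, then mask values for non-privileged readers): the column tags, group name, and hash-based masking below are illustrative assumptions, and in a Databricks deployment this would be enforced declaratively via Unity Catalog column masks rather than in application code.

```python
import hashlib

# Hypothetical column classifications; a governance catalog would hold
# these as tags on the table, not hard-coded in a script.
CLASSIFICATION = {"email": "pii", "card_number": "pci", "region": "public"}

def mask_row(row, user_groups):
    """Return a copy of `row` with sensitive columns masked.

    Non-public columns are replaced with a short deterministic hash
    (so joins on the masked value still work) unless the caller is in
    the hypothetical privileged group "data_stewards".
    """
    masked = {}
    for col, value in row.items():
        tag = CLASSIFICATION.get(col, "public")
        if tag != "public" and "data_stewards" not in user_groups:
            masked[col] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            masked[col] = value
    return masked

row = {"email": "a@b.com", "region": "EMEA"}
print(mask_row(row, user_groups={"analysts"}))       # email hashed, region clear
print(mask_row(row, user_groups={"data_stewards"}))  # returned unmasked
```

Deterministic hashing is a deliberate choice here: it hides the raw value while keeping equality joins and deduplication possible on the masked column.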

Posted 1 day ago

Apply

7.0 years

0 Lacs

India

On-site

About Us: MatchMove is a leading embedded finance platform that empowers businesses to embed financial services into their applications. We provide innovative solutions across payments, banking-as-a-service, and spend/send management, enabling our clients to drive growth and enhance customer experiences.

Are You The One?
As a Technical Lead Engineer - Data, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

MatchMove Culture:
We cultivate a dynamic and innovative culture that fuels growth, creativity, and collaboration. Our fast-paced fintech environment thrives on adaptability, agility, and open communication. We focus on employee development, supporting continuous learning and growth through training programs, learning on the job, and mentorship. We encourage speaking up, sharing ideas, and taking ownership. Embracing diversity, our team spans Asia, fostering a rich exchange of perspectives and experiences. Together, we harness the power of fintech and e-commerce to make a meaningful impact on people's lives.
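The responsibilities above mention schema drift handling for DMS-fed ingestion. The detection step can be sketched in plain Python; the dict-based schema representation and function name below are hypothetical, and a real pipeline would read the reference schema from the Glue Data Catalog or the Iceberg/Delta table's own metadata rather than a literal.

```python
def detect_drift(known_schema, record):
    """Compare an incoming record's fields against the last known schema.

    Returns (added, missing) column-name sets. Additive changes can
    usually be auto-evolved, since open table formats like Iceberg and
    Delta Lake support additive schema evolution; missing columns are
    only flagged, because dropped fields typically need human review.
    """
    incoming = set(record)
    known = set(known_schema)
    added = incoming - known      # new columns: candidate auto-evolution
    missing = known - incoming    # vanished columns: flag for review
    return added, missing

# Hypothetical payments schema and CDC record.
schema = {"id": "bigint", "amount": "decimal", "currency": "string"}
added, missing = detect_drift(schema, {"id": 1, "amount": 9.5, "channel": "pos"})
print(added, missing)  # {'channel'} {'currency'}
```

Splitting drift into "additive" and "destructive" buckets is what lets a pipeline keep replicating with low latency on safe changes while still pausing for the dangerous ones.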
Personal Data Protection Act: By submitting your application for this job, you are authorizing MatchMove to: collect and use your personal data, and to disclose such data to any third party with whom MatchMove or any of its related corporation has service arrangements, in each case for all purposes in connection with your job application, and employment with MatchMove; and retain your personal data for one year for consideration of future job opportunities (where applicable).

Posted 1 day ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.


Featured Companies