
16,012 Kafka Jobs - Page 12

Set up a job alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

10.0 years

60 - 70 Lacs

Visakhapatnam, Andhra Pradesh, India

Remote

Experience: 10.00+ years
Salary: INR 6,000,000 - 7,000,000 / year (based on experience)
Expected Notice Period: 30 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Neotas)
(Note: This is a requirement for one of Uplers' clients - Neotas)

What do you need for this opportunity?
Must-have skills: Problem Solving, AWS, Python, FastAPI

Neotas is looking for:

Key Responsibilities:
- Architect and design scalable, resilient, and secure SaaS solutions for 24x7 mission-critical applications.
- Lead greenfield product development as well as modernization of legacy systems.
- Collaborate with engineering, product, and customer teams to align architecture with business goals.
- Guide teams in the adoption of modern tools, frameworks, and technologies across the stack.
- Provide technical leadership and mentorship while remaining hands-on with code.
- Define and enforce best practices around coding, design patterns, DevOps, and system performance.
- Engage in high-stakes technical discussions with customers and partners, articulating solutions clearly.
- Drive architectural decisions for systems involving AI/ML pipelines, data-intensive operations, and real-time analytics.

Requirements:
- 8+ years of relevant hands-on development experience across backend, APIs, and architecture (preferably with complex B2B SaaS platforms).
- Proven experience building and scaling mission-critical, high-availability platforms using Python, FastAPI, and AWS.
- Strong experience in both greenfield application development and legacy modernization.
- Exposure to or experience working with AI/ML platforms, models, or data pipelines.
- Background working in startups or fast-scaling tech environments.
- Deep understanding of system design, distributed systems, microservices, APIs, and cloud-native architectures.
- Outstanding communication skills with the ability to lead customer-facing technical meetings and influence stakeholders.
- Strong problem-solving and decision-making skills, with a product-oriented mindset.

Nice to Have:
- Familiarity with tools and technologies like Kubernetes, Kafka, Elasticsearch, or similar.
- Experience with observability, monitoring, and performance optimization at scale.
- Contributions to open source or tech communities.

Interview Process:
- Technical Round 1 - internal or external
- Technical Round 2 - internal (with Tech Head)
- HR Round

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview.

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 days ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

What you'll do:
- Design, develop, and operate high-scale applications across the full engineering stack.
- Design, develop, test, deploy, maintain, and improve software.
- Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.).
- Work across teams to integrate our systems with existing internal systems, Data Fabric, and the CSA Toolset.
- Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality.
- Participate in a tight-knit, globally distributed engineering team.
- Triage product or system issues and debug, track, and resolve them by analyzing the sources of issues and their impact on network or service operations and quality.
- Research, create, and develop software applications to extend and improve Equifax solutions.
- Manage sole project priorities, deadlines, and deliverables.
- Collaborate on scalability issues involving access to data and information.
- Actively participate in sprint planning, sprint retrospectives, and other team activities.

What experience you need:
- Bachelor's degree or equivalent experience
- 5+ years of software engineering experience
- 5+ years of experience writing, debugging, and troubleshooting code in Java and SQL
- 2+ years of experience with cloud technology: GCP, AWS, or Azure
- 2+ years of experience designing and developing cloud-native solutions
- 2+ years of experience designing and developing microservices using Java, Spring Boot, GCP SDKs, and GKE/Kubernetes
- 3+ years of experience deploying and releasing software using Jenkins CI/CD pipelines, with an understanding of infrastructure-as-code concepts, Helm charts, and Terraform constructs

What could set you apart:
- Knowledge of or experience with Apache Beam for stream and batch data processing.
- Familiarity with big data tools and technologies like Apache Kafka, Hadoop, or Spark.
- Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).
- Exposure to data visualization tools or platforms.

Posted 2 days ago

Apply

10.0 years

60 - 70 Lacs

Indore, Madhya Pradesh, India

Remote

Same Neotas opportunity via Uplers as the Visakhapatnam listing above; the description is identical, and only the advertised location differs.

Posted 2 days ago

Apply

10.0 years

60 - 70 Lacs

Chandigarh, India

Remote

Same Neotas opportunity via Uplers as the listings above; the description is identical, and only the advertised location differs.

Posted 2 days ago

Apply

10.0 years

60 - 70 Lacs

Dehradun, Uttarakhand, India

Remote

Same Neotas opportunity via Uplers as the listings above; the description is identical, and only the advertised location differs.

Posted 2 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Are you a backend developer who thrives on building scalable services, working with cloud-native architectures, and solving complex technical problems? We're looking for a Senior Node.js Developer to work on cutting-edge digital banking platforms for one of the largest financial institutions in the UAE.

Why Join Us?
This role is part of a long-term digital transformation initiative that impacts millions of users. You'll be working in a collaborative Agile environment, delivering mission-critical services with a team that values performance, quality, and innovation.

Key Responsibilities:
- Design, develop, and maintain scalable RESTful APIs using Node.js
- Work hands-on with GraphQL for real-time and flexible API consumption
- Integrate with API gateways (e.g., 3scale) and SSO/token-based authentication mechanisms
- Implement containerization using Docker and Kubernetes, optimizing for cloud environments
- Manage task/message queues (e.g., Kafka, AWS SQS, Azure Queues) for async processing
- Collaborate with frontend, DevOps, and QA teams to ensure seamless delivery
- Participate in performance tuning, monitoring, and debugging in distributed systems
- Contribute to CI/CD processes and deployment automation pipelines
- Ensure best practices in coding, testing, security, and documentation
- Research and benchmark new technologies to maintain a competitive edge

Required Qualifications:
- Degree or postgraduate qualification in Computer Science, Engineering, or a related field
- Minimum 5 years of hands-on backend development experience in Node.js and JavaScript
- At least 1 year of experience with TypeScript
- Proficient with API design, microservices architecture, and database integration
- Familiarity with Agile methodologies and sprint-based delivery

Preferred Experience:
- Exposure to banking, fintech, or high-compliance enterprise environments
- Understanding of regulatory and data security standards in financial services
- Experience working in a DevOps culture with continuous integration and cloud infrastructure

Soft Skills & Traits:
- Technically sound, with the ability to influence architectural decisions
- Proactive, self-driven, and takes ownership end-to-end
- Strong communication skills for collaboration with cross-functional teams and senior stakeholders

Posted 2 days ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Role: Data Engineer
Experience: 7+ years
Mode: Hybrid

Key Responsibilities:
• Design and implement enterprise-grade Data Lake solutions using AWS (e.g., S3, Glue, Lake Formation).
• Define data architecture patterns, best practices, and frameworks for handling large-scale data ingestion, storage, compute, and processing.
• Optimize cloud infrastructure for performance, scalability, and cost-effectiveness.
• Develop and maintain ETL pipelines using tools such as AWS Glue or similar platforms; manage CI/CD pipelines in DevOps.
• Create and manage robust Data Warehousing solutions using technologies such as Redshift.
• Ensure high data quality and integrity across all pipelines.
• Design and deploy dashboards and visualizations using tools like Tableau, Power BI, or Qlik.
• Collaborate with business stakeholders to define key metrics and deliver actionable insights.
• Implement best practices for data encryption, secure data transfer, and role-based access control.
• Lead audits and compliance certifications to maintain organizational standards.
• Work closely with cross-functional teams, including Data Scientists, Analysts, and DevOps engineers.
• Mentor junior team members and provide technical guidance for complex projects.
• Partner with stakeholders to define and align data strategies that meet business objectives.

Qualifications & Skills:
• Strong experience in building Data Lakes using the AWS cloud platform tech stack.
• Proficiency with AWS technologies such as S3, EC2, Glue/Lake Formation (or EMR), QuickSight, Redshift, Athena, Airflow (or Lambda + Step Functions + EventBridge), Data, and IAM.
• Expertise in AWS tools covering Data Lake storage, compute, security, and data governance.
• Advanced skills in ETL processes, SQL (e.g., Cloud SQL, Aurora, Postgres), NoSQL databases (e.g., DynamoDB, MongoDB, Cassandra), and programming languages (e.g., Python, Spark, or Scala); real-time streaming applications, preferably in Spark, Kafka, or other streaming platforms.
• AWS data security: good understanding of security concepts such as Lake Formation, IAM, service roles, encryption, KMS, and Secrets Manager.
• Hands-on experience with Data Warehousing solutions and modern architectures like Lakehouses or Delta Lake; proficiency in visualization tools such as Tableau, Power BI, or Qlik.
• Strong problem-solving skills and the ability to debug and optimize applications for performance.
• Strong understanding of databases/SQL for database operations and data management.
• Familiarity with CI/CD pipelines and version control systems like Git.
• Strong understanding of Agile methodologies and working within Scrum teams.

Preferred Qualifications:
• Bachelor of Engineering degree in Computer Science, Information Technology, or a related field.
• AWS Certified Solutions Architect – Associate (required).
• Experience with Agile/Scrum methodologies and design sprints.

Posted 2 days ago

Apply

0.0 - 3.0 years

16 - 18 Lacs

Bengaluru, Karnataka

On-site

Location: Bangalore, Electronics City

As an experienced Full Stack Developer you will have opportunities to work at all levels of our technology stack, from the customer-facing dashboards and back-end business logic to the high-volume data collection and processing. As a Full Stack Developer you should be comfortable around a range of different technologies and languages, and with the integration of third-party libraries and development frameworks.

- Work with project stakeholders to understand requirements and ideate software solutions
- Design client-side and server-side architectures
- Build front-end applications delivering on usability and performance
- Build back-end services for scalability and reliability
- Write effective APIs and build to third-party APIs
- Adhere to security and data protection standards and requirements
- Instrument and test software to ensure the highest quality
- Monitor, troubleshoot, debug, and upgrade production systems
- Write technical documentation

REQUIREMENTS
- Proven experience as a Full Stack Developer or similar role
- Comfortable with Golang, Scala, Python, and Kafka, or the desire to learn these technologies
- Experience in front-end web development helping to create customer-facing user interfaces; experience with ReactJS a plus
- Familiarity with databases and data warehousing such as PostgreSQL, MongoDB, Snowflake
- Familiarity with the Amazon Web Services cloud platform
- Attention to detail, strong organizational skills, and a desire to be part of a team
- Degree in Computer Science, Engineering, or a relevant field

Job Types: Full-time, Permanent
Pay: ₹1,600,000.00 - ₹1,800,000.00 per year
Benefits: Health insurance, paid sick time, Provident Fund
Ability to commute/relocate: Bangalore, Karnataka: Reliably commute or planning to relocate before starting work (Required)
Application Question(s): Are you okay working in Electronics City, Bangalore? Python backend and ReactJS are a must.
Experience: Full-stack development: 3 years (Required)
Location: Bangalore, Karnataka (Required)
Willingness to travel: 100% (Required)
Work Location: In person

Posted 2 days ago

Apply

5.0 years

0 Lacs

Delhi, India

On-site

Job Title: MLOps Engineer
Notice Period: Max 30 days
Location: Gurgaon / Bangalore / Noida / Pune

Job Description:
- 5+ years of prior experience in Data Engineering and MLOps.
- 3+ years of strong exposure to deploying and managing data science pipelines in production environments.
- Strong proficiency in the Python programming language.
- Experience with Spark/PySpark and distributed computing frameworks.
- Hands-on experience with CI/CD pipelines and automation tools.
- Experience deploying a use case in production leveraging Generative AI, involving prompt engineering and the RAG framework.
- Familiarity with Kafka or similar messaging systems.
- Strong problem-solving skills and the ability to iterate and experiment to optimize AI model behavior.
- Excellent problem-solving skills and attention to detail.
- Ability to communicate effectively with diverse clients/stakeholders.

Education Background: Bachelor's or master's degree in Computer Science, Engineering, or a related field. Tier I/II candidates preferred.

Posted 2 days ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the role
We're looking for a Senior Engineering Manager to lead our Data/AI Platform and MLOps teams at slice. In this role, you'll be responsible for building and scaling a high-performing team that powers data infrastructure, real-time streaming, ML enablement, and data accessibility across the company. You'll partner closely with ML, product, platform, and analytics stakeholders to build robust systems that deliver high-quality, reliable data at scale. You will drive AI initiatives to centrally build an AI platform and apps that can be leveraged by various functions like legal, CX, and product in a secure manner. This is a hands-on leadership role, perfect for someone who enjoys solving deep technical problems while growing people and teams.

What You Will Do
- Lead and grow the data platform pod focused on all aspects of data (batch + real-time processing, ML platform, AI tooling, business reporting, and data products - enabling product experience through data)
- Maintain hands-on technical leadership - lead by example through code reviews, architecture decisions, and direct technical contribution
- Partner closely with product and business stakeholders to identify data-driven opportunities and translate business requirements into scalable data solutions
- Own the technical roadmap for our data platform, including infra modernization, performance, scalability, and cost efficiency
- Drive the development of internal data products like self-serve data access, centralized query layers, and feature stores
- Build and scale ML infrastructure with MLOps best practices, including automated pipelines, model monitoring, and real-time inference systems
- Lead AI platform development for hosting LLMs, building secure AI applications, and enabling self-service AI capabilities across the organization
- Implement enterprise AI governance, including model security, access controls, and compliance frameworks for internal AI applications
- Collaborate with engineering leaders across backend, ML, and security to align on long-term data architecture
- Establish and enforce best practices around data governance, access controls, and data quality
- Ensure regulatory compliance with GDPR, PCI-DSS, and SOX through automated compliance monitoring and secure data pipelines
- Implement real-time data processing for fraud detection and risk management with end-to-end encryption and audit trails
- Coach engineers and team leads through regular 1:1s, feedback, and performance conversations

What You Will Need
- 10+ years of engineering experience, including 2+ years managing data or infra teams, with proven hands-on technical leadership
- Strong stakeholder management skills, with experience translating business requirements into data solutions and identifying product enhancement opportunities
- Strong technical background in data platforms, cloud infrastructure (preferably AWS), and distributed systems
- Experience with tools like Apache Spark, Flink, EMR, Airflow, Trino/Presto, Kafka, and Kubeflow/Ray, plus the modern stack: dbt, Databricks, Snowflake, Terraform
- Hands-on experience building AI/ML platforms, including MLOps tools, and experience with LLM hosting, model serving, and secure AI application development
- Proven experience improving performance, cost, and observability in large-scale data systems
- Expert-level cloud platform knowledge with container orchestration (Kubernetes, Docker) and Infrastructure-as-Code
- Experience with real-time streaming architectures (Kafka, Redpanda, Kinesis)
- Understanding of AI/ML frameworks (TensorFlow, PyTorch), LLM hosting platforms, and secure AI application development patterns
- Comfort working in fast-paced, product-led environments, with the ability to balance innovation and regulatory constraints
- Bonus: experience with data security and compliance (PII/PCI handling), LLM infrastructure, and fintech regulations

Life at slice
Life so good, you'd think we're kidding:
- Competitive salaries. Period.
- Extensive medical insurance that looks out for our employees and their dependents. We'll love you and take care of you, our promise.
- Flexible working hours. Just don't call us at 3 AM, we like our sleep schedule.
- Tailored vacation and leave policies so that you enjoy every important moment in your life.
- A reward system that celebrates hard work and milestones throughout the year. Expect a gift coming your way anytime you kill it here.
- Learning and upskilling opportunities. Seriously, not kidding.
- Good food, games, and a cool office to make you feel like home.
- An environment so good, you'll forget the term "colleagues can't be your friends".

Posted 2 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Join us as a Solution Architect

This is an opportunity for an experienced Solution Architect to help us define the high-level technical architecture and design for a key data analytics and insights platform that powers the personalised customer engagement initiatives of the business. You'll define and communicate a shared technical and architectural vision of end-to-end designs that may span multiple platforms and domains. Take on this exciting new challenge and hone your technical capabilities while advancing your career and building your network across the bank. We're offering this role at vice president level.

What you'll do
We'll look to you to influence and promote collaboration across platform and domain teams on solution delivery. Partnering with platform and domain teams, you'll elaborate the solution and its interfaces, validating technology assumptions, evaluating implementation alternatives, and creating the continuous delivery pipeline. You'll also provide analysis of options and deliver end-to-end solution designs using the relevant building blocks, as well as producing designs for features that allow frequent incremental delivery of customer value.

On top of this, you'll be:
- Owning the technical design and architecture development that aligns with bank-wide enterprise architecture principles, security standards, and regulatory requirements
- Participating in activities to shape requirements, validating designs and prototypes to deliver change that aligns with the target architecture
- Promoting adaptive design practices to drive collaboration of feature teams around a common technical vision using continuous feedback
- Making recommendations of potential impacts to existing and prospective customers of the latest technology and customer trends
- Engaging with the wider architecture community within the bank to ensure alignment with enterprise standards
- Presenting solutions to governance boards and design review forums to secure approvals
- Maintaining up-to-date architectural documentation to support audits and risk assessment

The skills you'll need
As a Solution Architect, you'll bring expert knowledge of application architecture, and of business, data, or infrastructure architecture, with working knowledge of industry architecture frameworks such as TOGAF or ArchiMate. You'll also need an understanding of Agile and contemporary methodologies, with experience of working in Agile teams. A certification in cloud solutions like AWS Solution Architect is desirable, while an awareness of agentic AI-based application architectures using LLMs like OpenAI and agentic frameworks like LangGraph and CrewAI will be advantageous.

Furthermore, you'll need:
- Strong experience in solution design, enterprise architecture patterns, and cloud-native applications, including the ability to produce multiple views to highlight different architectural concerns
- Familiarity with big data processing in the banking industry
- Hands-on experience in AWS services, including but not limited to S3, Lambda, EMR, DynamoDB, and API Gateway
- An understanding of big data processing using frameworks or platforms like Spark, EMR, Kafka, Apache Flink, or similar
- Knowledge of real-time data processing, event-driven architectures, and microservices
- A conceptual understanding of data modelling and analytics, and of machine learning or deep learning models
- The ability to communicate complex technical concepts clearly to peers and leadership-level colleagues

Posted 2 days ago

Apply

10.0 years

0 Lacs

Delhi, India

On-site

About the Role:
We are seeking an Engineering Manager to join the SMC engineering organization. As a hands-on people leader, you will own the charter of several backend web applications that drive multi-million-dollar revenue for SMC. Your design, architecture, and people management expertise will help us scale the technology that powers industry-defining mobile applications, catering to millions of trading enthusiasts globally. Our EMs and Sr. EMs work directly with product managers and business leaders with minimal hierarchical overhead to understand key business goals, design the technology strategy, and take accountability for moving key business metrics. They are also responsible for driving technical innovations and agile methodologies without losing sight of the big picture. The ideal candidate will have consistent growth in software engineering roles in consumer Internet or SaaS companies, with increasing ownership in software delivery and people management year on year.

Opportunities we offer:
- Develop products that will disrupt the fintech market in India and internationally.
- Build, lead, and develop top technical talent in engineering.
- Learn scalable software development practices and technologies from proven technology experts.

What We Look For:
- 10+ years of experience in software development and 4+ years in engineering management leading teams of 10 or more backend or full-stack engineers.
- 7+ years of experience developing consumer-facing or SaaS applications on Amazon Web Services, Microsoft Azure, or Google Cloud.
- Several years of previous experience as a Technical Lead, Staff, or Principal Engineer developing web services or web applications in Node.js, Python, Go, React, Next.js, or Java.
- Excellent knowledge of microservices architecture and distributed design patterns, and a proven track record of architecting highly scalable and fault-tolerant web applications catering to millions of end users.
- Sound understanding of SQL databases like MySQL or PostgreSQL and NoSQL databases like Cassandra and MongoDB.
- Extensive usage of message brokers, caching, and search technologies like Kafka/RabbitMQ, Redis/Memcached, or Elasticsearch.
- Experience running containerized workloads on Kubernetes or OpenShift.
- Strong understanding of computer science concepts, data structures, and algorithms.
- Excellent communication skills and a strong inclination towards people growth, team development, and the growth mindset required to build high-performance engineering teams.

Bonus points for:
- Experience working with fintech/start-up culture.
- Sound knowledge of application security.
- Extensive experience using observability, telemetry, and cloud security tools like the ELK stack, Datadog, Dynatrace, Prometheus, Snyk, etc.

Posted 2 days ago

Apply

5.0 years

18 - 25 Lacs

Hyderabad, Telangana, India

On-site

Role: Senior .NET Engineer
Experience: 5-12 years
Location: Hyderabad (this is a work-from-office role)

Mandatory Skills: .NET Core, C#, Kafka, CI/CD pipelines, observability tools, orchestration tools, cloud microservices

Interview Process:
- First round - online test
- Second round - virtual technical discussion
- Manager/HR round - virtual discussion

Company Overview
It is a globally recognized leader in the fintech industry, delivering cutting-edge trading solutions for professional traders worldwide. With over 15 years of excellence, a robust international presence, and a team of over 300 skilled professionals, we continually push the boundaries of technology to remain at the forefront of financial innovation. Committed to fostering a collaborative and dynamic environment, the company prioritizes technical excellence, innovation, and continuous growth for our team. Join our agile-based team to contribute to the development of advanced trading platforms in a rapidly evolving industry.

Position Overview
We are seeking a highly skilled Senior .NET Engineer to play a pivotal role in the design, development, and optimization of highly scalable and performant domain-driven microservices for our real-time trading applications. This role demands advanced expertise in multi-threaded environments, asynchronous programming, and modern software design patterns such as Clean Architecture and Vertical Slice Architecture. As part of an Agile squad, you will collaborate with cross-functional teams to deliver robust, secure, and efficient systems, adhering to the highest standards of quality, performance, and reliability. This position is ideal for engineers who excel in building low-latency, high-concurrency systems and have a passion for advancing fintech solutions.

Key Responsibilities

System Design and Development
- Architect and develop real-time, domain-driven microservices using .NET Core to ensure scalability, modularity, and performance.
- Leverage multi-threaded programming techniques and asynchronous programming paradigms to build systems optimized for high-concurrency workloads.
- Implement event-driven architectures to enable seamless communication between distributed services, leveraging tools such as Kafka or AWS SQS.

System Performance and Optimization
- Optimize applications for low latency and high throughput in trading environments, addressing challenges related to thread safety, resource contention, and parallelism.
- Design fault-tolerant systems capable of handling large-scale data streams and real-time events.
- Proactively monitor and resolve performance bottlenecks using advanced observability tools and techniques.

Architectural Contributions
- Contribute to the design and implementation of scalable, maintainable architectures, including Clean Architecture, Vertical Slice Architecture, and CQRS.
- Collaborate with architects and stakeholders to align technical solutions with business requirements, particularly for trading and financial systems.
- Employ advanced design patterns to ensure robustness, fault isolation, and adaptability.

Agile Collaboration
- Participate actively in Agile practices, including Scrum ceremonies such as sprint planning, daily stand-ups, and retrospectives.
- Collaborate with Product Owners and Scrum Masters to refine technical requirements and deliver high-quality, production-ready software.

Code Quality and Testing
- Write maintainable, testable, and efficient code adhering to test-driven development (TDD) methodologies.
- Conduct detailed code reviews, ensuring adherence to best practices in software engineering, coding standards, and system architecture.
- Develop and maintain robust unit, integration, and performance tests to uphold system reliability and resilience.

Monitoring and Observability
- Integrate OpenTelemetry to enhance system observability, enabling distributed tracing, metrics collection, and log aggregation.
- Collaborate with DevOps teams to implement real-time monitoring dashboards using tools such as Prometheus, Grafana, and Elastic (Kibana).
- Ensure systems are fully observable, with actionable insights into performance and reliability metrics.

Required Expertise - Technical Skills
- 5+ years of experience in software development, with a strong focus on .NET Core and C#.
- Deep expertise in multi-threaded programming, asynchronous programming, and handling concurrency in distributed systems.
- Extensive experience in designing and implementing domain-driven microservices with advanced architectural patterns like Clean Architecture or Vertical Slice Architecture.
- Strong understanding of event-driven systems, with knowledge of messaging frameworks such as Kafka, AWS SQS, or RabbitMQ.
- Proficiency in observability tools, including OpenTelemetry, Prometheus, Grafana, and Elastic (Kibana).
- Hands-on experience with CI/CD pipelines, containerization using Docker, and orchestration tools like Kubernetes.
- Expertise in Agile methodologies under Scrum practices.
- Solid knowledge of Git and version control best practices.

Beneficial Skills
- Familiarity with Saga patterns for managing distributed transactions.
- Experience in trading or financial systems, particularly with low-latency, high-concurrency environments.
- Advanced database optimization skills for relational databases such as SQL Server.

Certifications and Education
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- Relevant certifications in software development, system architecture, or AWS technologies are advantageous.

Why Join?
- Exceptional team building and corporate celebrations
- Be part of a high-growth, fast-paced fintech environment
- Flexible working arrangements and a supportive culture
- Opportunities to lead innovation in the online trading space

Skills: observability tools, Docker, Git, Grafana, .NET Core, Agile methodologies, CQRS, asynchronous programming, event-driven systems, CI/CD pipelines, Kubernetes, TDD, OpenTelemetry, Vertical Slice Architecture, Elastic (Kibana), cloud microservices, orchestration tools, containerization, event-driven architectures, C#, AWS SQS, Clean Architecture, SQL Server, Kafka, Prometheus, Scrum practices, multi-threaded programming, .NET

Posted 2 days ago

Apply

7.0 years

0 Lacs

Gurgaon Rural, Haryana, India

On-site

- Minimum of 7+ years of experience in the data analytics field.
- Proven experience with Azure/AWS Databricks in building and optimizing data pipelines, architectures, and datasets.
- Strong expertise in Scala or Python, PySpark, and SQL for data engineering tasks.
- Ability to troubleshoot and optimize complex queries on the Spark platform.
- Knowledge of structured and unstructured data design, modelling, access, and storage techniques.
- Experience designing and deploying data applications on cloud platforms such as Azure or AWS.
- Hands-on experience in performance tuning and optimizing code running in Databricks environments.
- Strong analytical and problem-solving skills, particularly within Big Data environments.
- Experience with Big Data management tools and technologies, including Cloudera, Python, Hive, Scala, Data Warehouse, Data Lake, AWS, and Azure.

Technical and Professional Skills (Must Have):
- Excellent communication skills with the ability to interact directly with customers.
- Azure/AWS Databricks.
- Python / Scala / Spark / PySpark.
- Strong SQL and RDBMS expertise.
- Hive / HBase / Impala / Parquet.
- Sqoop, Kafka, Flume.
- Airflow.

Posted 2 days ago

Apply

5.0 - 7.0 years

25 - 28 Lacs

Pune, Maharashtra, India

On-site

Job Description
We are looking for a Big Data Engineer who will work on building and managing Big Data pipelines for us to deal with the huge structured data sets that we use as an input to accurately generate analytics at scale for our valued customers. The primary focus will be on choosing optimal solutions to use for these purposes, then maintaining, implementing, and monitoring them. You will also be responsible for integrating them with the architecture used across the company.

Core Responsibilities
- Design, build, and maintain robust data pipelines (batch or streaming) that process and transform data from diverse sources.
- Ensure data quality, reliability, and availability across the pipeline lifecycle.
- Collaborate with product managers, architects, and engineering leads to define technical strategy.
- Participate in code reviews, testing, and deployment processes to maintain high standards.
- Own smaller components of the data platform or pipelines and take end-to-end responsibility.
- Continuously identify and resolve performance bottlenecks in data pipelines.
- Take initiative, show the drive to pick up new things proactively, and work as a senior individual contributor on the multiple products and features we have.

Required Qualifications
- 5 to 7 years of experience in Big Data or data engineering roles.
- JVM-based languages like Java or Scala are preferred; for someone with solid Big Data experience, Python would also be OK.
- Proven and demonstrated experience working with distributed Big Data tools and processing frameworks like Apache Spark or equivalent (for processing), Kafka or Flink (for streaming), and Airflow or equivalent (for orchestration).
- Familiarity with cloud platforms (e.g., AWS, GCP, or Azure), including services like S3, Glue, BigQuery, or EMR.
- Ability to write clean, efficient, and maintainable code.
- Good understanding of data structures, algorithms, and object-oriented programming.

Tooling & Ecosystem
- Use of version control (e.g., Git) and CI/CD tools.
- Experience with data orchestration tools (Airflow, Dagster, etc.).
- Understanding of file formats like Parquet, Avro, ORC, and JSON.
- Basic exposure to containerization (Docker) or infrastructure-as-code (Terraform is a plus).

Skills: Airflow, pipelines, data engineering, Scala, Python, Flink, AWS, data orchestration, Java, Kafka, GCP, Parquet, ORC, Azure, Dagster, CI/CD, Git, Avro, Terraform, JSON, Docker, Apache Spark, Big Data

Posted 2 days ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About Us:
Paytm is India's leading mobile payments and financial services distribution company. A pioneer of the mobile QR payments revolution in India, Paytm builds technologies that help small businesses with payments and commerce. Paytm's mission is to serve half a billion Indians and bring them to the mainstream economy with the help of technology.

Job Summary:
- Build systems for collection and transformation of complex data sets for use in production systems.
- Collaborate with engineers on building and maintaining back-end services.
- Implement data schema and data management improvements for scale and performance.
- Provide insights into key performance indicators for the product and customer usage.
- Serve as the team's authority on data infrastructure, privacy controls, and data security.
- Collaborate with appropriate stakeholders to understand user requirements.
- Support efforts for continuous improvement, metrics, and test automation.
- Maintain operations of the live service as issues arise, on a rotational, on-call basis.
- Verify whether the data architecture meets security and compliance requirements and expectations.
- Should be able to learn fast and adapt quickly at a rapid pace.
- Java/Scala, SQL.

Minimum Qualifications:
- Bachelor's degree in computer science, computer engineering, or a related field, or equivalent experience.
- 3+ years of progressive experience demonstrating strong architecture, programming, and engineering skills.
- Firm grasp of data structures and algorithms, with fluency in programming languages like Java, Python, and Scala.
- Strong SQL skills; should be able to write complex queries.
- Strong experience with orchestration tools like Airflow.
- Demonstrated ability to lead, partner, and collaborate cross-functionally across many engineering organizations.
- Experience with streaming technologies such as Apache Spark, Kafka, and Flink.
- Backend experience including Apache Cassandra, MongoDB, and relational databases such as Oracle and PostgreSQL.
- Solid hands-on AWS/GCP experience, with 4+ years of experience.
- Strong communication and soft skills.
- Knowledge of and/or experience with containerized environments, Kubernetes, and Docker.
- Experience implementing and maintaining highly scalable microservices in REST, Spring Boot, and gRPC.
- Appetite for trying new things and building rapid POCs.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines to support data ingestion, processing, and storage.
- Implement data integration solutions to consolidate data from multiple sources into a centralized data warehouse or data lake.
- Collaborate with data scientists and analysts to understand data requirements and translate them into technical specifications.
- Ensure data quality and integrity by implementing robust data validation and cleansing processes.
- Optimize data pipelines for performance, scalability, and reliability.
- Develop and maintain ETL (Extract, Transform, Load) processes using tools such as Apache Spark, Apache NiFi, or similar technologies.
- Monitor and troubleshoot data pipeline issues, ensuring timely resolution and minimal downtime.
- Implement best practices for data management, security, and compliance.
- Document data engineering processes, workflows, and technical specifications.
- Stay up-to-date with industry trends and emerging technologies in data engineering and big data.

Compensation: If you are the right fit, we believe in creating wealth for you. With 500 mn+ registered users, 25 mn+ merchants, and depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers and merchants - and we are committed to it. India's largest digital lending story is brewing here. It's your opportunity to be a part of the story!

Posted 2 days ago

Apply

0.0 - 10.0 years

20 - 40 Lacs

Kochi, Kerala

On-site

An exciting opportunity to join an established UK-based company with a 20% year-on-year growth rate. A rapidly growing UK-based software (SaaS) company dedicated to providing cutting-edge solutions for the logistics and transportation industry. With ongoing investment in new products, we offer the excitement and innovation of a start-up coupled with the stability and benefits of an established business.

Knowledge, Skills and Experience Required:
- Able to communicate clearly and accurately on technical topics in English (verbal and written).
- Can write performant, testable, and maintainable Java code, with 6+ years of proven commercial Java experience.
- Knowledge of best practice and patterns across the implementation, build, and deployment of Java services.
- Proven extensive experience of the Java ecosystem and related technologies and frameworks:
  - Spring Boot, Spring libraries and frameworks
  - Hibernate
  - Maven
- Fluent in TDD and familiar with BDD.
- Knowledge of Git, JIRA, Confluence, Maven, Docker, and using Jenkins.
- Solid experience of working with RESTful services in microservices-oriented architectures.
- Solid knowledge of working within a cloud-based infrastructure, ideally AWS.
- Knowledge of NoSQL and relational database management systems, especially PostgreSQL.
- Experience of building services within event- or stream-based systems using SQS, Kafka, or Pulsar, and CQRS.
- Thorough understanding of Computer Science fundamentals and software patterns.

Nice to have:
- Experience with AWS services such as Lambda, SQS, S3, Rekognition Face Liveness.
- Experience with Camunda BPMN.

Job Types: Full-time, Permanent
Pay: ₹2,000,000.00 - ₹4,000,000.00 per year
Location Type: In-person
Schedule: Day shift, evening shift, morning shift; Monday to Friday
Ability to commute/relocate: Ernakulam, Kerala: Reliably commute or planning to relocate before starting work (Required)
Experience: Total: 10 years (Required); Java: 10 years (Required)
Work Location: In person
Expected Start Date: 30/08/2025

Posted 2 days ago

Apply

5.0 years

20 - 25 Lacs

Pune, Maharashtra, India

On-site

Senior DevOps Engineer (27087)
Location: Pune (Hybrid - 3 days onsite)
Shift: US time zones (EST/CST)
Joining: Immediate joiners only
Employment Type: Full-time

About The Opportunity
A leading digital innovation and product consultancy is seeking a Senior DevOps Engineer to join a high-performing engineering team. This is a hybrid opportunity designed for professionals who are passionate about infrastructure automation, CI/CD, and cloud platforms, and who also have experience in backend Java development. This role is ideal for someone who thrives in fast-paced environments, has a strong DevOps foundation, and enjoys mentoring team members while working on impactful, modern cloud-native projects.

Key Responsibilities
- Design and maintain robust CI/CD pipelines using tools like Jenkins, GitHub Actions, GitLab, or Azure DevOps
- Architect scalable, secure, and cost-effective cloud infrastructure (AWS, Azure, or GCP) using Infrastructure-as-Code (Terraform, Pulumi)
- Manage and deploy containerized applications with Docker, Kubernetes, and Helm
- Collaborate with backend teams to contribute to Java-based microservices architecture (Spring Boot, Kafka, REST APIs)
- Implement observability and monitoring solutions (Prometheus, Grafana, EFK stack)
- Lead DevOps best practices and mentor junior team members
- Participate in Agile development processes and global technical discussions

Required Qualifications
- Bachelor's or Master's degree in Computer Science or a related field
- 5+ years of hands-on experience in DevOps engineering
- 3+ years of experience with CI/CD tools
- 3+ years of experience with containerization and orchestration (Docker, Kubernetes, Helm)
- Strong experience in any major cloud provider (AWS, Azure, or GCP)
- Proficiency with Infrastructure-as-Code tools (Terraform or Pulumi)
- Fluency in the Linux command line and systems administration
- Strong scripting skills in Bash, PowerShell, TypeScript, or Python
- Java development experience using Java 8 or 17+, Spring Boot, Kafka, and microservices
- Experience mentoring or guiding junior engineers
- Strong communication and analytical skills

Preferred Qualifications
- Experience working in distributed global teams
- Familiarity with an Agile/Scrum development environment
- Demonstrated leadership or consulting mindset

Additional Details
- Location: Pune-based candidates only
- Work Model: Hybrid (3 days onsite per week)
- Office Address: Pune Nagar Road, Yerwada, Pune, Maharashtra - 411006
- Shift: Aligned to US time zones (EST/CST)
- Notice Period: Only immediate joiners will be considered
- Relocation: Not applicable
- Candidate Profile: Preference for a stable career history with no significant employment gaps

Skills: EFK stack, REST APIs, Python, GitLab, Jenkins, infrastructure automation, Linux, GCP, DevOps, Grafana, CI/CD, Infrastructure-as-Code, Azure DevOps, GitHub Actions, TypeScript, Kubernetes, Docker, Kafka, Prometheus, Terraform, Spring Boot, Java, Bash, AWS, Helm, cloud platforms, backend Java development, Pulumi, Azure, PowerShell

Posted 2 days ago

Apply

0 years

0 Lacs

India

On-site

Caprae Capital Partners is an innovative private equity firm led by principal Kevin Hong, a serial tech entrepreneur who grew two startups to $31M ARR and $7M in revenue. The fund originated with two additional tech entrepreneur friends of Kevin who have had ~8-figure and ~9-figure exits to Twitter and Square, respectively. Additional partners include an ex-NASA software engineer and an ex-Chief of Staff from Google. Caprae Capital, in conjunction with its portfolio company, launched AI-RaaS (AI Readiness as a Service) and is looking for teammates to join for the long haul.

If you have a passion for disrupting the finance industry and happen to be a mission-driven person, this is a great fit for you. Additionally, given the recent expansion of this particular firm, you will have the opportunity to work from the ground level and take on a leadership role for the internship program, which would result in a paid role. Lastly, this is also a great role for those who are looking into strategy and consulting roles in the future, as it will give you the exposure and experience necessary to develop strong business acumen.

Role Overview
We are looking for a Lead Full Stack Developer to architect and lead the development of new features for SaaSquatchLeads.com, an AI-driven lead generation and sales intelligence platform. You will own technical direction, guide other engineers, and ensure our stack is scalable, maintainable, and optimized for AI-powered workloads.

Key Responsibilities
- Lead architectural design and technical strategy for SaaSquatchLeads.com.
- Develop, deploy, and maintain end-to-end features spanning frontend, backend, and AI integrations.
- Implement and optimize AI-driven services for lead scoring, personalization, and predictive analytics.
- Build and maintain data pipelines for ingesting, processing, and analyzing large datasets.
- Mentor and guide a distributed engineering team, setting best coding practices.
- Collaborate with product, design, and data science teams to align technical execution with business goals.
- Ensure security, performance, and scalability of the platform.

Required Skills & Technologies
- Frontend: React, JavaScript (ES6+), TypeScript, Redux/Zustand, HTML, CSS, TailwindCSS.
- Backend: Python (Flask, FastAPI, Django), Node.js (bonus).
- AI & Data Science: Python, PyTorch, Hugging Face, OpenAI APIs, LangChain, Pandas, NumPy.
- Databases: PostgreSQL, MySQL, MongoDB, Redis.
- DevOps & Infrastructure: Docker, Kubernetes, AWS (Lambda, S3, RDS, EC2), CI/CD pipelines.
- Data Processing: ETL tools, message queues (Kafka, RabbitMQ).
- Search & Indexing: Elasticsearch, Meilisearch (for fast lead lookups).

Posted 2 days ago

Apply

4.0 - 6.0 years

0 Lacs

India

Remote

Location: Remote
Experience: 4-6 years
Position: Gen-AI Developer (Hands-on)

Technical Requirements:
Hands-on Data Science, Agentic AI, AI/Gen AI/ML/NLP
Azure services (App Services, Containers, AI Foundry, AI Search, Bot Services)
Experience in C# Semantic Kernel
Strong background in working with LLMs and building Gen AI applications
AI agent concepts
.NET Aspire
End-to-end environment setup for ML/LLM/Agentic AI (Dev/Prod/Test)
Machine Learning & LLM deployment and development
Model training, fine-tuning, and deployment
Kubernetes, Docker, Serverless architecture
Infrastructure as Code (Terraform, Azure Resource Manager)
Performance Optimization & Cost Management
Cloud cost management & resource optimization, auto-scaling
Cost efficiency strategies for cloud resources
MLOps frameworks (Kubeflow, MLflow, TFX)
Large language model fine-tuning and optimization
Data pipelines (Apache Airflow, Kafka, Azure Data Factory)
Data storage (SQL/NoSQL, Data Lakes, Data Warehouses)
Data processing and ETL workflows
Cloud security practices (VPCs, firewalls, IAM)
Secure cloud architecture and data privacy
CI/CD pipelines (Azure DevOps, GitHub Actions, Jenkins)
Automated testing and deployment for ML models
Agile methodologies (Scrum, Kanban)
Cross-functional team collaboration and sprint management
Experience with model fine-tuning and infrastructure setup for local LLMs
Custom model training and deployment pipeline design
Good communication skills (written and oral)
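As a small illustration of the MLOps tooling listed above, the sketch below logs a fine-tuning run's parameters and a metric with MLflow. The training function, parameter values, and metric are placeholders, not a prescribed workflow.

import mlflow

def train(learning_rate: float, epochs: int) -> float:
    # Placeholder training loop; returns a fake validation score.
    return 0.9 - 0.01 * epochs + learning_rate

with mlflow.start_run(run_name="llm-finetune-demo"):
    lr, epochs = 3e-5, 3
    mlflow.log_param("learning_rate", lr)
    mlflow.log_param("epochs", epochs)
    val_score = train(lr, epochs)
    mlflow.log_metric("val_score", val_score)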

Posted 2 days ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Description
GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/)
Leads projects for design, development and maintenance of a data and analytics platform. Effectively and efficiently processes, stores and makes data available to analysts and other consumers. Works with key business stakeholders, IT experts and subject-matter experts to plan, design and deliver optimal analytics and data science solutions. Works on one or many product teams at a time.

Key Responsibilities
Designs and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured).
Designs and implements a framework to continuously monitor and troubleshoot data quality and data integrity issues.
Implements data governance processes and methods for managing metadata, access, and retention of data for internal and external users.
Designs and provides guidance on building reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages.
Designs and implements physical data models to define the database structure. Optimizes database performance through efficient indexing and table relationships.
Participates in optimizing, testing, and troubleshooting of data pipelines.
Designs, develops and operates large-scale data storage and processing solutions using different distributed and cloud-based platforms for storing data (e.g. Data Lakes, Hadoop, Hbase, Cassandra, MongoDB, Accumulo, DynamoDB, others).
Uses innovative and modern tools, techniques and architectures to partially or completely automate the most common, repeatable and tedious data preparation and integration tasks in order to minimize manual and error-prone processes and improve productivity.
Assists with renovating the data management infrastructure to drive automation in data integration and management.
Ensures the timeliness and success of critical analytics initiatives by using agile development technologies such as DevOps, Scrum, Kanban.
Coaches and develops less experienced team members.

Responsibilities
Competencies:
System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts.
Collaborates - Building partnerships and working collaboratively with others to meet shared objectives.
Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences.
Customer focus - Building strong customer relationships and delivering customer-centric solutions.
Decision quality - Making good and timely decisions that keep the organization moving forward.
Data Extraction - Performs data extract-transform-load (ETL) activities from a variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies.
Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements.
Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product.
Solution Documentation - Documents information and solutions based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning.
Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements.
Data Quality - Identifies, understands and corrects flaws in data that supports effective information governance across operational business processes and decision making.
Problem Solving - Solves problems and may mentor others on effective problem solving by using a systematic analysis process, leveraging industry-standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem recurrence are implemented.
Values differences - Recognizing the value that different perspectives and cultures bring to an organization.

Education, Licenses, Certifications
College, university, or equivalent degree in a relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations.

Experience
Intermediate experience in a relevant discipline area is required. Knowledge of the latest technologies and trends in data engineering is highly preferred and includes:
5-8 years of experience
Familiarity with analyzing complex business systems, industry requirements, and/or data regulations
Background in processing and managing large data sets
Design and development for a Big Data platform using open-source and third-party tools
Spark, Scala/Java, Map-Reduce, Hive, Hbase, and Kafka, or equivalent college coursework
SQL query language
Clustered compute cloud-based implementation experience
Experience developing applications requiring large file movement for a cloud-based environment, and other data extraction tools and methods from a variety of sources
Experience in building analytical solutions

Intermediate experience in the following is preferred:
Experience with IoT technology
Experience in Agile software development

Qualifications
Work closely with the business Product Owner to understand the product vision.
Play a key role across DBU Data & Analytics Power Cells to define and develop data pipelines for efficient data transport into Cummins Digital Core (Azure DataLake, Snowflake).
Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment of DBU project data pipeline design standards.
Independently design, develop, test and implement complex data pipelines from transactional systems (ERP, CRM) to Datawarehouses, DataLake.
Responsible for creation, maintenance and management of DBU Data & Analytics data engineering documentation and standard operating procedures (SOP).
Take part in evaluation of new data tools, POCs and provide suggestions.
Take full ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization.
Proactively address and resolve issues that compromise data accuracy and usability.

Preferred Skills
Programming Languages: Proficiency in languages such as Python, Java, and/or Scala.
Database Management: Expertise in SQL and NoSQL databases.
Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks.
Cloud Services: Experience with Azure, Databricks and AWS cloud platforms.
ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes.
Data Replication: Working knowledge of replication technologies like Qlik Replicate is a plus.
API: Working knowledge of APIs to consume data from ERP, CRM
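A minimal PySpark sketch of the kind of streaming ingestion pipeline described above, reading events from Kafka and landing them in a data lake path. It assumes a Spark build with the Kafka connector available on the classpath; the broker, topic, and paths are illustrative placeholders, not Cummins systems.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-to-datalake").getOrCreate()

# Read a Kafka topic as a stream and keep key, value and event timestamp.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "erp.transactions")
    .load()
    .select(col("key").cast("string"), col("value").cast("string"), "timestamp")
)

# Land the raw events as Parquet files with a checkpoint for exactly-once recovery.
query = (
    events.writeStream.format("parquet")
    .option("path", "/datalake/raw/erp_transactions")
    .option("checkpointLocation", "/datalake/_checkpoints/erp_transactions")
    .start()
)
query.awaitTermination()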

Posted 2 days ago

Apply

4.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Description
GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/)
Supports, develops and maintains a data and analytics platform. Effectively and efficiently processes, stores and makes data available to analysts and other consumers. Works with the Business and IT teams to understand the requirements to best leverage the technologies to enable agile data delivery at scale.

Key Responsibilities
Implements and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured).
Implements methods to continuously monitor and troubleshoot data quality and data integrity issues.
Implements data governance processes and methods for managing metadata, access, and retention of data for internal and external users.
Develops reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages.
Develops physical data models and implements data storage architectures as per design guidelines.
Analyzes complex data elements and systems, data flow, dependencies, and relationships in order to contribute to conceptual, physical and logical data models.
Participates in testing and troubleshooting of data pipelines.
Develops and operates large-scale data storage and processing solutions using different distributed and cloud-based platforms for storing data (e.g. Data Lakes, Hadoop, Hbase, Cassandra, MongoDB, Accumulo, DynamoDB, others).
Uses agile development technologies, such as DevOps, Scrum, Kanban and continuous improvement cycles, for data-driven applications.

Responsibilities
Competencies:
System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts.
Collaborates - Building partnerships and working collaboratively with others to meet shared objectives.
Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences.
Customer focus - Building strong customer relationships and delivering customer-centric solutions.
Decision quality - Making good and timely decisions that keep the organization moving forward.
Data Extraction - Performs data extract-transform-load (ETL) activities from a variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies.
Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements.
Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product.
Solution Documentation - Documents information and solutions based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning.
Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements.
Data Quality - Identifies, understands and corrects flaws in data that supports effective information governance across operational business processes and decision making.
Problem Solving - Solves problems and may mentor others on effective problem solving by using a systematic analysis process, leveraging industry-standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem recurrence are implemented.
Values differences - Recognizing the value that different perspectives and cultures bring to an organization.

Education, Licenses, Certifications
College, university, or equivalent degree in a relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations.

Experience
4-5 years of experience. Relevant experience preferred, such as working in temporary student employment, an internship, a co-op, or other extracurricular team activities. Knowledge of the latest technologies in data engineering is highly preferred and includes:
Exposure to Big Data open-source tools such as Spark, Scala/Java, Map-Reduce, Hive, Hbase, and Kafka, or equivalent college coursework
SQL query language
Clustered compute cloud-based implementation experience
Familiarity with developing applications requiring large file movement for a cloud-based environment
Exposure to Agile software development
Exposure to building analytical solutions
Exposure to IoT technology

Qualifications
Work closely with the business Product Owner to understand the product vision.
Participate in DBU Data & Analytics Power Cells to define and develop data pipelines for efficient data transport into Cummins Digital Core (Azure DataLake, Snowflake).
Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment of DBU project data pipeline design standards.
Work under limited supervision to design, develop, test and implement complex data pipelines from transactional systems (ERP, CRM) to Datawarehouses, DataLake.
Responsible for creation of DBU Data & Analytics data engineering documentation and standard operating procedures (SOP) with guidance and help from senior data engineers.
Take part in evaluation of new data tools and POCs with guidance and help from senior data engineers.
Take ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization under limited supervision.
Assist in resolving issues that compromise data accuracy and usability.
Programming Languages: Proficiency in languages such as Python, Java, and/or Scala.
Database Management: Intermediate-level expertise in SQL and NoSQL databases.
Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks.
Cloud Services: Experience with Azure, Databricks and AWS cloud platforms.
ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes.
API: Working knowledge of APIs to consume data from ERP, CRM
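To illustrate the data-quality monitoring this role calls for, here is a small pandas sketch that validates a batch before it is loaded; the column names, rules, and sample data are assumptions for the example, not a prescribed standard.

import pandas as pd

def validate_batch(df: pd.DataFrame) -> list[str]:
    # Return a list of human-readable data-quality violations for the batch.
    problems = []
    if df["order_id"].isna().any():
        problems.append("null order_id values found")
    if df["order_id"].duplicated().any():
        problems.append("duplicate order_id values found")
    if (df["amount"] < 0).any():
        problems.append("negative amounts found")
    return problems

batch = pd.DataFrame(
    {"order_id": [1, 2, 2, None], "amount": [10.0, -5.0, 7.5, 3.2]}
)
for issue in validate_batch(batch):
    print("DQ violation:", issue)

In a real pipeline, violations like these would typically raise an alert or quarantine the batch rather than just print.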

Posted 2 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra

On-site

Job Information
Date Opened: 08/01/2025
Job Type: Full time
Industry: IT Services
City: Pune City
State/Province: Maharashtra
Country: India
Zip/Postal Code: 411001

About Us
About DATAECONOMY: We are a fast-growing data & analytics company headquartered in Dublin with offices in Dublin, OH, Providence, RI, and an advanced technology center in Hyderabad, India. We are clearly differentiated in the data & analytics space via our suite of solutions, accelerators, frameworks, and thought leadership.

Job Description
BACKEND ENGINEER
Understanding of Spring AOP, microservices architecture design and implementation
Basic understanding of microservices design patterns such as Circuit Breaker, etc.
Experience with event-driven frameworks such as Kafka, RabbitMQ, or IBM MQ
Ability to implement container-based APIs using container frameworks like OpenShift, Docker, or Kubernetes
Working experience with Gradle, Git, GitHub, GitLab, etc. around continuous integration and continuous delivery infrastructure

Requirements
Experience of:
5+ years in REST frameworks with a focus on API development with Spring Boot
3+ years in Microservice Architecture based applications
Good experience in Agile methodology (Scrum, Lean, SAFe, etc.)
2+ years' experience integrating with backend services like Kafka, Event Hub, RabbitMQ, AWS SQS, J2C, ORM frameworks (Hibernate, JPA, JDO, etc.), JDBC

Technology Stack
Java/J2EE, Spring, Spring Boot, Microservices, Kafka, OpenShift, Docker, Kubernetes
RDBMS databases like Oracle, MS SQL Server, AWS, RDS, GitLab

Benefits
Standard Company Benefits
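The Circuit Breaker pattern named in this listing can be sketched in a few lines. The posting's stack is Java/Spring Boot, where this is commonly provided by libraries such as Resilience4j or Spring Cloud Circuit Breaker, so the Python sketch below is only a language-neutral illustration of the idea; the thresholds and wrapped call are illustrative.

import time

class CircuitBreaker:
    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # timestamp when the breaker opened

    def call(self, func, *args, **kwargs):
        # Fail fast while the breaker is open; allow one trial call after cool-down.
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: permit a single trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()  # open the breaker
            raise
        self.failures = 0  # success closes the breaker again
        return result

Wrapping a downstream call, e.g. breaker.call(fetch_orders, customer_id), keeps repeated failures from cascading into the caller.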

Posted 2 days ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana

On-site

Associate Manager, Solution Engineering - Hyderabad, Telangana

Are you ready to join a team in a global company where your primary focus is to deliver services and products to our customers, and provide thought leadership on creating solutions? Are you interested in joining a globally diverse organization where our unique contributions are recognized and celebrated, allowing each of us to thrive? Then it's time to join Western Union as an Associate Manager, Solution Engineering.

Western Union powers your pursuit
The role of Associate Manager - Technology Operations will own end-to-end governance for solution and services delivery of Middleware Operations and Engineering for the applications portfolio: on-premises and on the cloud. The role is expected to contribute to formulating, developing and executing Middleware Technology Strategy, Planning & Governance. Work closely with the Application Engineering, Operations/Production Support, and Product teams, own the solution from a technical perspective, and engage with internal customers in the successful completion of assigned projects on time. You should possess hands-on experience in working on various middleware tools/technologies on AWS and on-premises infrastructure. Experience in design through deployment of middleware solutions and production support with performance monitoring, tuning and root cause analysis is critical.

Role Responsibilities
Maintain and manage different middleware technologies such as Tibco BW BE EMS AS/Jetty/JBOSS/Tomcat Apache/IIS/WebSphere/IHS/WebLogic/IBM ACE MQ DP on on-prem or AWS Cloud infrastructure.
Design and implement DR solutions.
Collaborate with architecture, engineering and support teams in designing and deploying various application solutions.
Ability to handle multiple projects and deadlines in a fast-paced environment independently.
Advanced troubleshooting skills: application performance tuning, issue resolutions.
Good communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams.
Good delivery exposure starting from configuration, development and deployment. Experience with Agile/Scrum technologies.
Proven ability to manage multiple projects simultaneously and prioritize tasks effectively.
Design and execute upgrades and migrations, including on-prem and in the cloud.
Define and manage best practices around application security and help ensure security and compliance across all application systems.
Provide on-call support for production systems.
Continuous improvement and automation as much as possible.
Communicate clearly and regularly with project teams and management.
Mentor the team, build cloud knowledge within the team and drive for team success.

Role Requirements
Minimum of 7+ years of experience working on different middleware technologies (Tibco BW BE EMS AS/Jetty/JBOSS/Tomcat Apache/IIS/WebSphere/IHS/WebLogic/IBM ACE MQ DP)
Developing knowledge in Hawk rules, Grafana Prometheus.
Familiar with IT Service Management tools like ServiceNow.
Experience with Splunk, AppD, Zenoss, AWS CloudWatch, CI/CD tools.
Strong Windows, AIX/Unix/Linux administration skills.
Expert-level JVM dump reading and end-to-end troubleshooting skills.
Experience in using cloud-native technologies to build applications.
Strong understanding of serverless computing.
DevOps exposure and knowledge of one or more tools such as Chef, Puppet, Jenkins, Ansible, Python.
Working experience on different flavors of OS (Unix/Linux/Solaris/Windows) Must Have: L3 MW – TIBCO Install, patch, monitor, diagnose, performance tune Tibco software Provide 24x7 third-level support. Provide SME advice on architecture, design, and implementation of new projects and deliverables to applications. Follow best practice implementation process – checklist, pre/post implementation validation and checkout, exercised backout procedures for platform stability and high success rate. Analyze and performance tune the Middleware platforms Tibco, Kafka. Develop and manage Hawk rules, Confluent monitoring, Grafana Prometheus. Devise automation and autonomics trigger and scripts for best performance, self-tuning, and outage avoidance. Work with the vendor in root cause analysis, managing tickets, collecting doc, devising workarounds, taking corrective or preventative actions, and implementing vendor fixes. Abide by all requirements from Security, Compliance, and Audit. Keep all platforms at supported software levels and apply CIS security patching. Experience with Jenkins, GIT, AWS, Zenoss, Splunk, AppD, Dynatrace. Tibco Suite Administration (BW5, BW6, Active Spaces, EMS, BE). Developing Hawk rules, Grafana Prometheus, Confluent Monitoring. Good Knowledge on ServiceNow, or other ticketing management tools, Jira. Strong Windows, AIX/Unix/Linux administration skills and AWS tools. Diagnostic and root cause determination skills. Knowledge of JVM, Thread, Core dump reading. Cloud Technologies (Azure/AWS) – desired Migration experience Tibco BE, BW, EMS, Active Spaces, Kafka latest versions. Experience with JAVA, Shell/Python Scripting, RDBMS, SQL. Experience with web services, certificates, SOAP, API management Ability to multitask and prioritize responsibilities daily, often under pressure. Excellent written and oral communication skills. Good knowledge on Windows, AIX/Unix/Linux, AWS Proficiency in JAVA, Shell/Python Scripting, RDBMS, SQL Good to Have: L2 MW – Tomcat/WebSphere/Jboss/MS IIS Experience in administering tomcat, WebSphere, Jboss on Linux and MS IIS on windows. SSL Certification and its management. Experience in Application performance tuning. Experience in applying patches on middleware application servers. Experience in DevOps tools like Spinnaker, Ansible, Cloud bees. Housekeeping and Incident Resolution. Proficiency in scripting languages like Bash or Groovy for automation tasks. Familiarity with various operating systems like Linux, Windows Server and Unix. Good understanding of networking concepts and protocols (TCP/IP, DNS, HTTP/HTTPS) to configure network settings and troubleshoot connectivity issues and optimize middleware communication. Understanding backup and recovery strategies for middleware environments, including regular backups of configuration data, application artifacts, and system files to ensure data integrity and DR readiness. Application security. App and Web tier management. App Server Products - Tomcat, WebSphere, Jboss, MS IIS Web Server Products – Apache http, MS IIS ITIL certified – Added advantage. We make financial services accessible to humans everywhere. Join us for what’s next. Western Union is positioned to become the world’s most accessible financial services company —transforming lives and communities. We’re a diverse and passionate customer-centric team of over 8,000 employees serving 200 countries and territories, reaching customers and receivers around the globe. 
More than moving money, we design easy-to-use products and services for our digital and physical financial ecosystem that help our customers move forward. Just as we help our global customers prosper, we support our employees in achieving their professional aspirations. You’ll have plenty of opportunities to learn new skills and build a career, as well as receive a great compensation package. If you’re ready to help drive the future of financial services, it’s time for the Western Union. Learn more about our purpose and people at https://careers.westernunion.com/. Benefits You will also have access to short-term incentives, multiple health insurance options, accident and life insurance, and access to best-in-class development platforms, to name a few (https://careers.westernunion.com/global-benefits/). Please see the location-specific benefits below and note that your Recruiter may share additional role-specific benefits during your interview process or in an offer of employment. Your India specific benefits include: Employees Provident Fund [EPF] Gratuity Payment Public holidays Annual Leave, Sick leave, Compensatory leave, and Maternity / Paternity leave Annual Health Check up Hospitalization Insurance Coverage (Mediclaim) Group Life Insurance, Group Personal Accident Insurance Coverage, Business Travel Insurance Cab Facility Relocation Benefit Western Union values in-person collaboration, learning, and ideation whenever possible. We believe this creates value through common ways of working and supports the execution of enterprise objectives which will ultimately help us achieve our strategic goals. By connecting face-to-face, we are better able to learn from our peers, problem-solve together, and innovate. Our Hybrid Work Model categorizes each role into one of three categories. Western Union has determined the category of this role to be Hybrid. This is defined as a flexible working arrangement that enables employees to divide their time between working from home and working from an office location. The expectation is to work from the office a minimum of three days a week. We are passionate about diversity. Our commitment is to provide an inclusive culture that celebrates the unique backgrounds and perspectives of our global teams while reflecting the communities we serve. We do not discriminate based on race, color, national origin, religion, political affiliation, sex (including pregnancy), sexual orientation, gender identity, age, disability, marital status, or veteran status. The company will provide accommodation to applicants, including those with disabilities, during the recruitment process, following applicable laws. #LI-RP #LI-Hybrid Estimated Job Posting End Date: 08-05-2025 This application window is a good-faith estimate of the time that this posting will remain open. This posting will be promptly updated if the deadline is extended or the role is filled.

Posted 2 days ago

Apply

0.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

General information Country India State Telangana City Hyderabad Job ID 45479 Department Development Description & Requirements Senior Java Developer is responsible for architecting and developing advanced Java solutions. This role involves leading the design and implementation of microservice architectures with Spring Boot, optimizing services for performance and scalability, and ensuring code quality. The Senior Developer will also mentor junior developers and collaborate closely with cross-functional teams to deliver comprehensive technical solutions. Essential Duties: Lead the development of scalable, robust, and secure Java components and services. Architect and optimize microservice solutions using Spring Boot. Translate customer requirements into comprehensive technical solutions. Conduct code reviews and maintain high code quality standards. Optimize and scale microservices for performance and reliability. Collaborate effectively with cross-functional teams to innovate and develop solutions. Experience in leading projects and mentoring engineers in best practices and innovative solutions. Coordinate with customer and client-facing teams for effective solution delivery. Basic Qualifications: Bachelor’s degree in Computer Science or a related field. 7-9 years of experience in Java development. Expertise in designing and implementing Microservices with Spring Boot. Extensive experience in applying design patterns, system design principles, and expertise in event-driven and domain-driven design methodologies. Extensive experience with multithreading, asynchronous and defensive programming. Proficiency in MongoDB, SQL databases, and S3 data storage. Experience with Kafka, Kubernetes, AWS services & AWS SDK. Hands-on experience with Apache Spark. Strong knowledge of Linux, Git, and Docker. Familiarity with Agile methodologies and tools like Jira and Confluence. Excellent communication and leadership skills. Preferred Qualifications Experience with Spark using Spring Boot. Familiarity with the C4 Software Architecture Model. Experience using tools like Lucidchart for architecture and flow diagrams. About Infor Infor is a global leader in business cloud software products for companies in industry specific markets. Infor builds complete industry suites in the cloud and efficiently deploys technology that puts the user experience first, leverages data science, and integrates easily into existing systems. Over 60,000 organizations worldwide rely on Infor to help overcome market disruptions and achieve business-wide digital transformation. For more information visit www.infor.com Our Values At Infor, we strive for an environment that is founded on a business philosophy called Principle Based Management™ (PBM™) and eight Guiding Principles: integrity, stewardship & compliance, transformation, principled entrepreneurship, knowledge, humility, respect, self-actualization. Increasing diversity is important to reflect our markets, customers, partners, and communities we serve in now and in the future. We have a relentless commitment to a culture based on PBM. Informed by the principles that allow a free and open society to flourish, PBM™ prepares individuals to innovate, improve, and transform while fostering a healthy, growing organization that creates long-term value for its clients and supporters and fulfillment for its employees. Infor is an Equal Opportunity Employer. We are committed to creating a diverse and inclusive work environment. 
Infor does not discriminate against candidates or employees because of their sex, race, gender identity, disability, age, sexual orientation, religion, national origin, veteran status, or any other protected status under the law. If you require accommodation or assistance at any time during the application or selection processes, please submit a request by following the directions located in the FAQ section at the bottom of the infor.com/about/careers webpage.

Posted 2 days ago

Apply