Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Summary

Strategy
Develop the strategic direction and roadmap for SCPAY, aligning with Business Strategy, ITO Strategy and investment priorities.
Tap into the latest industry trends and innovative products and solutions to deliver effective, faster product capabilities.
Support Cash Management Operations, leveraging technology to streamline processes, enhance productivity, reduce risk and improve controls.

Business
Work hand in hand with the Payments Business, taking product programmes from investment decision through design, specification, solutioning, development, implementation and hand-over to operations, securing support and collaboration from other SCB teams.
Ensure delivery to the business within time, cost and quality constraints.
Support the respective businesses in growing return on investment, commercialising capabilities, supporting bid teams, monitoring usage, improving client experience, enhancing operations, addressing defects and continuously improving systems.
Drive an ecosystem of innovation, enabling the business through technology.

Processes
Responsible for end-to-end delivery of the technology portfolio comprising key business product areas such as Payments & Clearing. Own technology delivery of projects and programmes across global SCB markets that a) develop/enhance core product capabilities, b) ensure compliance with regulatory mandates, c) support operational improvements, process efficiencies and the zero-touch agenda, and d) build a payments platform aligned with the latest technology and architecture trends, with improved stability and scale.

Key Responsibilities

People & Talent
Employ, engage and retain high-quality talent to ensure the Payments Technology team is adequately staffed and skilled to deliver on business commitments.
Lead by example and build an appropriate culture and values. Set the right tone and expectations for the team, and work in collaboration with risk and control partners. 
Bridge skill and capability gaps through learning and development.
Ensure roles, job descriptions and expectations are clearly set and periodic feedback is provided to the entire team.
Ensure the optimal blend and balance of in-house and vendor resources.

Risk Management
Be proactive in providing regular assurance that the Payments ITO team is performing to acceptable risk levels and control standards.
Act quickly and decisively when any risk or control weakness becomes apparent, and ensure it is addressed within prescribed timeframes and escalated through the relevant committees.
Balance business delivery against time, quality and cost constraints, and against risks and controls, to ensure they do not materially threaten the Group's ability to remain within acceptable risk levels.
Ensure business continuity and disaster recovery planning for the entire technology portfolio.

Governance
Promote an environment where compliance with internal control functions and the external regulatory framework is embedded.

Regulatory & Business Conduct
Display exemplary conduct and live by the Group's Values and Code of Conduct.
Take personal responsibility for embedding the highest standards of ethics, including regulatory and business conduct, across Standard Chartered Bank. This includes understanding and ensuring compliance with, in letter and spirit, all applicable laws, regulations, guidelines and the Group Code of Conduct.
Lead the team to achieve the outcomes set out in the Bank's Conduct Principles: [Fair Outcomes for Clients; Effective Financial Markets; Financial Crime Compliance; The Right Environment.]
Effectively and collaboratively identify, escalate, mitigate and resolve risk, conduct and compliance matters. 
Serve as a Director of the Board of [insert name of entities]. Exercise authorities delegated by the Board of Directors and act in accordance with the Articles of Association (or equivalent).

Key Stakeholders
Solution Architect – SCPAY
SCPAY – Programme Managers
Group Payments Product Development Heads
Group Cash Operations

Skills And Experience
Java / Spring Boot
Kafka Streams, REST, JSON
Design Principles
Hazelcast & ELK
Oracle & Postgres

Qualifications
Minimum 10 years of experience in a development role; a couple of years of experience in a dev lead role is an added advantage. Good knowledge of Java, Microservices and Spring Boot.
Technical knowledge: Java / Spring Boot, Kafka Streams, REST, JSON, Netflix microservices suite (Zuul / Eureka / Hystrix, etc.), 12-Factor Apps, Oracle, PostgreSQL, Cassandra & ELK.
Ability to work with geographically dispersed and highly varied stakeholders.
Very good communication and interpersonal skills to manage senior stakeholders and top management.
Knowledge of JIRA and Confluence is desirable.

About Standard Chartered
We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us. Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good, are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion. 
Together We
Do the right thing: we are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do.
Never settle: we continuously strive to improve and innovate, keeping things simple and learning from doing well, and not so well.
Are better together: we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term.

What We Offer
In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing.
Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations.
Time off including annual leave, parental/maternity leave (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, which combine to a minimum of 30 days.
Flexible working options based around home and office locations, with flexible working patterns.
Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, a global Employee Assistance Programme, sick leave, mental health first-aiders and a range of self-help toolkits.
A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning.
Being part of an inclusive and values-driven organisation, one that embraces and celebrates our unique diversity across our teams, business functions and geographies, where everyone feels respected and can realise their full potential.
Posted 1 day ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Java Full Stack
Experience: 7+ yrs
Location Option: Chennai / Pune - WFO all 5 days

Tech Stack Required: Java (version 8+), Spring Boot, Microservices with any database and any messaging queue (preferably Kafka), and Angular 2+ versions (not AngularJS).

Java Full Stack: We are looking for Full Stack Developers who can continue to develop and enhance our platform to meet our client needs. In this role, you will work with our stakeholders, senior engineers and Product team to understand business requirements, architect technology solutions to solve the problems, and build out the solutions. Our Office platform's current tech stack includes Java, Angular, Spring Boot, Docker, Ruby, and the Rails application framework. We use Postgres/Oracle as our RDBMS and IBM MQ/Kafka for messaging.

Requirements:
A Bachelor's degree in computer science, engineering, or a related discipline with 5+ years of work experience.
Strong fundamentals in data structures, algorithms, and object-oriented design.
Proficiency in Java 17 or higher and front-end UI technologies.
Strong experience in the Spring Framework and Hibernate, and proficiency with Spring Boot.
Experience in Angular 11 or higher, JavaScript frameworks, CSS, HTML.
Experience and good understanding of messaging frameworks like IBM MQ / Kafka.
Experience in test-driven and behavior-driven development.
Experience with Agile software development methodologies, tools and processes.
Knowledge of architectural patterns, including microservices architecture.
Knowledge of the securities or financial services domain is a plus.

Job responsibilities:
Work within a scrum team of 8+ people highly focused on service delivery, resiliency and interoperability with other services in the middle office platform.
Consult and collaborate with other technologists to leverage and contribute to reusable code and services. 
Develop subject matter expertise in one or more functional areas.
Drive the design of scalable, high-performing and robust applications, and represent the software in design/code reviews with senior staff.
Help the tech leadership team shape best practices for developing, sharing and continuously improving our software platform.

Apply: shruthi@letzbizz.com
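The messaging side of this stack (services exchanging events through IBM MQ or Kafka topics) follows a publish/subscribe pattern. As a hedged illustration only, with an in-memory class standing in for the broker (none of these names come from the listing), the core idea looks like:

```python
from collections import defaultdict
from typing import Callable

class InMemoryBus:
    """Toy stand-in for a broker's topic model (e.g. Kafka or IBM MQ)."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        # Fan the message out to every subscriber of the topic,
        # as a real broker would.
        for handler in self._subscribers[topic]:
            handler(message)

# Usage: one service publishes trade events, another consumes them.
bus = InMemoryBus()
received: list[dict] = []
bus.subscribe("trades", received.append)
bus.publish("trades", {"id": 1, "symbol": "INFY", "qty": 100})
```

A real deployment would add durability, partitioning, and consumer offsets; this sketch only shows the decoupling between producer and consumer that the listing's event-driven requirements refer to.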
Posted 1 day ago
3.0 years
0 Lacs
Delhi, India
On-site
Responsibilities
Design, build, and maintain scalable data pipelines and streaming systems.
Develop real-time WebSocket and API integrations for data ingestion and delivery.
Manage and optimize relational and non-relational databases for performance, reliability, and scalability.
Collaborate with AI and product teams to build quantitative models and serve them in production environments.
Design, build, and maintain robust ETL/ELT pipelines for ingesting and transforming on-chain and off-chain data.
Build a real-time data streaming infrastructure using tools like Kafka or equivalent.
Architect and optimize relational and non-relational databases to support complex queries and financial data models.
Collaborate with product and analytics teams to design and deploy quantitative models for TVL, yield tracking, protocol metrics, etc.
Implement tools and practices to ensure data integrity, quality, and observability across the platform.
Contribute to our indexing infrastructure, working with smart contract data, subgraphs, or custom indexers.

Requirements
3+ years of experience in data engineering, backend systems, or infrastructure roles.
Strong knowledge of databases (SQL, NoSQL) and experience with data modeling and schema design.
Proficient with PostgreSQL, TimescaleDB, or other time-series/analytical databases.
Hands-on experience with stream processing frameworks (Kafka, Flink, etc.).
Expertise in building and consuming RESTful APIs and WebSocket protocols.
Familiarity with blockchain data or financial data.
Strong programming skills in Python or Go.
Experience with quantitative finance modeling, DeFi metrics, or financial KPIs is a strong plus.
Solid understanding of cloud infrastructure (e.g., AWS, GCP, or similar).

Nice To Have
Experience with subgraphs, The Graph, or building custom blockchain indexers.
Background in data visualization platforms or interactive dashboards.
Knowledge of DeFi protocols, tokenomics, and governance systems. 
Prior experience working in a fast-paced startup or early-stage product environment.

This job was posted by Utsav Agarwal from Sharpe Labs.
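The ETL/ELT transform step described above ingests raw records and produces protocol-level metrics such as TVL. A minimal sketch of that kind of transform follows; the record shape and field names (`protocol`, `amount`, `price_usd`) are assumptions for illustration, not Sharpe Labs' actual schema:

```python
from collections import defaultdict

def tvl_by_protocol(records: list[dict]) -> dict[str, float]:
    """Aggregate raw position records into total value locked per protocol.

    Each record is assumed to carry 'protocol', 'amount' and 'price_usd'.
    Malformed records are skipped rather than failing the whole batch,
    a common choice for keeping a pipeline run alive.
    """
    totals: dict[str, float] = defaultdict(float)
    for rec in records:
        try:
            totals[rec["protocol"]] += float(rec["amount"]) * float(rec["price_usd"])
        except (KeyError, TypeError, ValueError):
            continue  # would go to a dead-letter queue in a real pipeline
    return dict(totals)

raw = [
    {"protocol": "aave", "amount": 10, "price_usd": 2.5},
    {"protocol": "aave", "amount": 4, "price_usd": 2.5},
    {"protocol": "uniswap", "amount": 3, "price_usd": 10.0},
    {"protocol": "aave"},  # malformed: missing fields, skipped
]
print(tvl_by_protocol(raw))  # {'aave': 35.0, 'uniswap': 30.0}
```

In production this logic would typically run inside a stream processor or an orchestrated batch job, with the output written to a time-series store such as TimescaleDB.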
Posted 1 day ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We are seeking a talented Full Stack Developer experienced in Java, Kotlin, Spring Boot, Angular, and Apache Kafka to join our dynamic engineering team. The ideal candidate will design, develop, and maintain end-to-end web applications and real-time data processing solutions, leveraging modern frameworks and event-driven architectures.

Location: Noida/Pune/Bangalore/Hyderabad/Chennai
Timings: 2pm to 11pm
Experience: 4-6 Years

Key Responsibilities
Design, develop, and maintain scalable web applications using Java, Kotlin, Spring Boot, and Angular.
Build and integrate RESTful APIs and microservices to connect frontend and backend components.
Develop and maintain real-time data pipelines and event-driven features using Apache Kafka.
Collaborate with cross-functional teams (UI/UX, QA, DevOps, Product) to define, design, and deliver new features.
Write clean, efficient, and well-documented code following industry best practices and coding standards.
Participate in code reviews, provide constructive feedback, and ensure code quality and consistency.
Troubleshoot and resolve application issues, bugs, and performance bottlenecks in a timely manner.
Optimize applications for maximum speed, scalability, and security.
Stay updated with the latest industry trends, tools, and technologies, and proactively suggest improvements.
Participate in Agile/Scrum ceremonies and contribute to continuous integration and delivery pipelines.

Required Qualifications
Experience with cloud-based technologies and deployment (Azure, GCP).
Familiarity with containerization (Docker, Kubernetes) and microservices architecture.
Proven experience as a Full Stack Developer with hands-on expertise in Java, Kotlin, Spring Boot, and Angular (Angular 2+).
Strong understanding of object-oriented and functional programming principles.
Experience designing and implementing RESTful APIs and integrating them with frontend applications. 
Proficiency in building event-driven and streaming applications using Apache Kafka.
Experience with database systems (SQL/NoSQL), ORM frameworks (e.g., Hibernate, JPA), and SQL.
Familiarity with version control systems (Git) and CI/CD pipelines.
Good understanding of HTML5, CSS3, JavaScript, and TypeScript.
Experience with Agile development methodologies and working collaboratively in a team environment.
Excellent problem-solving, analytical, and communication skills.
Posted 1 day ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Chief Technology Officer (CTO)

Role Overview:
We are seeking a visionary Chief Technology Officer to lead our technology function and drive the development of innovative AdTech solutions. In this leadership role, you will define and implement the company's technical strategy while overseeing engineering, data science, and product technology teams. Your focus will be on building scalable, high-performance platforms including RTB, DSP, and SSP systems.

Key Responsibilities:
Develop and execute a forward-looking technology roadmap aligned with business goals.
Lead cross-functional teams in engineering and product development.
Architect and manage real-time bidding systems, data infrastructure, and platform scalability.
Drive innovation in AI/ML, big data, and real-time analytics.
Ensure system reliability, security, DevOps, and data privacy best practices.
Collaborate with leadership to deliver impactful tech-driven products.
Represent the company in technical partnerships and industry events.

Requirements:
10+ years in software engineering, with 5+ in a leadership role.
Strong background in AdTech (RTB, DSP, SSP, OpenRTB).
Expertise in AI/ML, cloud (AWS/GCP), and big data (Kafka, Spark, Hadoop).
Proven experience in building scalable backend systems and leading high-performing teams.
Bachelor's or Master's in Computer Science or Engineering; MBA/PhD is a plus.
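The RTB platforms named above commonly settle impressions with a second-price-style auction: the highest bidder wins but pays the runner-up's price (subject to a floor). A minimal sketch of that mechanic, with invented bidder names and prices purely for illustration, is:

```python
def run_second_price_auction(bids: dict[str, float], floor: float = 0.0):
    """Return (winner, clearing_price) for a second-price auction.

    The winner pays the higher of the second-highest eligible bid and
    the floor. Returns (None, 0.0) if no bid meets the floor.
    """
    eligible = {bidder: price for bidder, price in bids.items() if price >= floor}
    if not eligible:
        return None, 0.0
    ranked = sorted(eligible.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    clearing = ranked[1][1] if len(ranked) > 1 else floor
    return winner, max(clearing, floor)

# dsp_c is filtered out by the floor; dsp_a wins at dsp_b's price.
print(run_second_price_auction({"dsp_a": 2.40, "dsp_b": 1.90, "dsp_c": 0.50},
                               floor=1.00))  # ('dsp_a', 1.9)
```

A production exchange layers timeouts, OpenRTB request/response parsing, and fraud filtering around this core; only the pricing rule is shown here.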
Posted 1 day ago
6.0 - 12.0 years
0 Lacs
Greater Bengaluru Area
On-site
Job Title: Node.js Developer
Years of Experience: 6 to 12 years
Notice Period: Immediate to 30 days
Location: Bangalore, Chennai, Dubai
Work Mode: WFO

About Us:
We prioritize our employees, fostering a collaborative and inclusive culture. Our mission is to empower our team while delivering exceptional solutions that enhance business performance and user experiences.
GenAI Product Development | Digital Technology Solutions | ValueLabs

Key Responsibilities:
Design and develop scalable, high-performance Node.js applications.
Develop and deploy RESTful APIs using Node.js, Express.js, and related technologies.
Collaborate with cross-functional teams to identify business requirements and develop solutions.
Troubleshoot and resolve technical issues related to Node.js applications.
Stay up to date with the latest Node.js technologies and best practices.

Required:
Minimum 5 years of coding experience in Node.js, JavaScript and databases.
At least 1 year of hands-on experience in TypeScript.
Hands-on experience in performance tuning, debugging, and monitoring.

Technical Skills:
Excellent knowledge of developing scalable and highly available RESTful APIs using Node.js technologies.
Practical experience with GraphQL.
Well versed in CI/CD principles, and actively involved in solving and troubleshooting issues in a distributed services ecosystem.
Understanding of containerization; experienced with Docker and Kubernetes.
Exposure to API gateway integrations like 3scale.
Understanding of single sign-on and token-based authentication (REST, JWT, OAuth).
Expert knowledge of task/message queues, including but not limited to AWS, Microsoft Azure, Pushpin and Kafka.

Thanks,
Monica P
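The JWT-style token-based authentication mentioned in the skills list boils down to signing a payload with a shared secret and re-verifying that signature on every request. A stdlib-only sketch of the HS256 idea follows (in Python rather than Node.js purely for illustration; not production code, and the claim names are invented):

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict, secret: bytes) -> str:
    """Produce a compact header.payload.signature token (HS256-style)."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify(token: str, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    try:
        header, body, sig = token.split(".")
    except ValueError:
        return False
    expected = _b64url(
        hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    )
    return hmac.compare_digest(sig, expected)

token = sign({"sub": "user-42", "role": "admin"}, b"shared-secret")
assert verify(token, b"shared-secret")
assert not verify(token, b"wrong-secret")
```

In a real Node.js service a maintained library would handle this along with expiry (`exp`) checks and key rotation; the sketch only shows why tampering with either the payload or the signature makes verification fail.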
Posted 1 day ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Principal Data Engineer (MTS4 / Principal Engineer)

About the Role
As a Principal Data Engineer, you will drive the strategy, architecture, and execution of large-scale data solutions across our function. This role involves tackling highly ambiguous, complex challenges where the business problem may not be fully defined at the outset. You will partner closely with cross-functional teams (Engineering, Product, Operations) to shape and deliver our data roadmap. Your work will have a profound impact on our function's data capabilities, influencing multiple teams' technical and product direction. You should bring deep expertise in designing and developing robust data pipelines and platforms, leveraging technologies such as Spark, Airflow, Kafka, and other emerging tools. You will set standards and best practices that raise the bar for engineering excellence across the organization.

Key Responsibilities

Architect & Define Scope
Own end-to-end design of critical data pipelines and platforms in an environment characterized by high ambiguity.
Translate loosely defined business objectives into a clear technical plan, breaking down complex problems into achievable milestones.

Technology Leadership & Influence
Provide thought leadership in data engineering, driving the adoption of Spark, Airflow, Kafka, and other relevant technologies (e.g., Hadoop, Flink, Kubernetes, Snowflake, etc.).
Lead design reviews and champion best practices for coding, system architecture, data quality, and reliability.
Influence senior stakeholders (Engineers, EMs, Product Managers) on technology decisions and roadmap priorities.

Execution & Delivery
Spearhead strategic, multi-team projects that advance the organization's data infrastructure and capabilities.
Deconstruct complex architectures into simpler components that can be executed by various teams in parallel. 
Drive operational excellence, owning escalations and ensuring high availability, scalability, and cost-effectiveness of our data solutions.
Mentor and develop engineering talent, fostering a culture of collaboration and continuous learning.

Impact & Technical Complexity
Shape how the organization operates by introducing innovative data solutions and strategic technical direction.
Solve endemic, highly complex data engineering problems with robust, scalable, and cost-optimized solutions.
Continuously balance short-term business needs with long-term architectural vision.

Process Improvement & Best Practices
Set and enforce engineering standards that elevate quality and productivity across multiple teams.
Lead by example in code reviews, automation, CI/CD practices, and documentation.
Champion a culture of continuous improvement, driving adoption of new tools and methodologies to keep our data ecosystem cutting-edge.

Qualifications

Education & Experience:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field (or equivalent experience).
5+ years of software/data engineering experience, with significant exposure to large-scale distributed systems.

Technical Expertise:
Demonstrated proficiency with Spark, Airflow, Kafka, and at least one major programming language (e.g., Python, Scala, Java).
Experience with data ecosystem technologies such as Hadoop, Flink, Snowflake, Kubernetes, etc.
Proven track record of architecting and delivering highly scalable data infrastructure solutions.

Leadership & Communication:
Ability to navigate and bring clarity in ambiguous situations.
Strong cross-functional collaboration skills, influencing both technical and non-technical stakeholders.
Experience coaching and mentoring senior engineers.

Problem-Solving:
History of tackling complex, ambiguous data challenges and delivering tangible results.
Comfort making informed trade-offs between opportunity and architectural complexity.
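Orchestrators such as Airflow, named throughout this role, model a pipeline as a directed acyclic graph and run tasks in dependency order. As a hedged illustration of that core idea (the task names are invented, not any real pipeline), Python's standard library can compute a valid execution order:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical pipeline: each task maps to the set of tasks it depends on,
# mirroring how an Airflow DAG wires upstream/downstream tasks.
pipeline = {
    "extract_events": set(),
    "extract_dims": set(),
    "transform": {"extract_events", "extract_dims"},
    "load_warehouse": {"transform"},
    "publish_metrics": {"load_warehouse"},
}

order = list(TopologicalSorter(pipeline).static_order())

# Every task appears after all of its dependencies.
position = {task: i for i, task in enumerate(order)}
for task, deps in pipeline.items():
    assert all(position[dep] < position[task] for dep in deps)
print(order)
```

Real schedulers add retries, backfills, and parallel execution of independent branches (here, the two extracts could run concurrently); the topological ordering above is the invariant they all preserve.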
Posted 1 day ago
2.0 - 3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Requirements
Minimum of 2-3 years of full-stack software development experience building large-scale, mission-critical applications.
Strong foundation in computer science, with strong competencies in data structures, algorithms, and software design optimized for building highly distributed and parallelized systems.
Proficiency in one or more programming languages - Java and Python.
Strong hands-on experience in MEAN, MERN, Core Java, J2EE technologies, Microservices, Spring, Hibernate, SQL, REST APIs.
Experience in web development using technologies like Angular or React.
Experience with one or more of the following database technologies: SQL Server, Postgres, MySQL, and NoSQL such as HBase, MongoDB, and DynamoDB.
Strong problem-solving skills to deep dive, brainstorm, and choose the best solution approach.
Experience with AWS services like EKS, ECS, S3, EC2, RDS, Redshift, and GitHub/Stash, CI/CD pipelines, Maven, Jenkins, security tools, Kubernetes/VMs/Linux, monitoring, alerting, etc.
Experience in Agile development is a big plus.
Excellent presentation, collaboration, and communication skills required.
Result-oriented and experienced in leading broad initiatives and teams.
Knowledge of Big Data technologies like Hadoop, Hive, Spark, Kafka, etc. would be an added advantage.
Bachelor's or Master's degree in Mathematics or Computer Science.
1-4 years of experience as a Full Stack Engineer.
Proven analytical skills and experience designing scalable applications.

This job was posted by Vivek Chhikara from Protium.
Posted 1 day ago
15.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a highly skilled and experienced Application Architect with a strong background in designing and architecting both user interfaces and backend Java microservices, with significant exposure to Amazon Web Services (AWS). As an Application Architect, you will be responsible for defining the architectural vision, ensuring scalability, performance, and maintainability of our applications. You will collaborate closely with engineering teams, product managers, and other stakeholders to deliver robust and innovative solutions.

Responsibilities

Architectural Design and Vision:
Define and communicate the architectural vision and principles for both frontend and backend systems.
Design scalable, resilient, and secure application architectures leveraging Java microservices and cloud-native patterns on AWS.
Develop and maintain architectural blueprints, guidelines, and standards.
Evaluate and recommend technologies, frameworks, and tools for both UI and backend development.
Ensure alignment of architectural decisions with business goals and technical strategy.

UI Architecture and Development Guidance:
Provide architectural guidance and best practices for developing modern and responsive user interfaces (e.g., using React, Angular, Vue.js).
Define UI architecture patterns, component design principles, and state management strategies.
Ensure UI performance, accessibility, and user experience considerations are integrated into the architecture.
Collaborate with UI developers and designers to ensure technical feasibility and optimal implementation of UI designs.

Backend Microservices Architecture and Development Guidance:
Design and architect robust and scalable backend systems using Java and microservices architecture.
Define API contracts, data models, and integration patterns for microservices.
Ensure the security, reliability, and performance of backend services. 
Provide guidance to backend Java developers on best practices, coding standards, and architectural patterns.

AWS Cloud Architecture and Deployment:
Design and implement cloud-native solutions on AWS, leveraging services such as EC2, ECS/EKS, Lambda, S3, RDS, DynamoDB, API Gateway, etc.
Define infrastructure-as-code (IaC) strategies using tools like CloudFormation or Terraform.
Architect for high availability, fault tolerance, and disaster recovery on AWS.
Optimize cloud costs and ensure efficient resource utilization.
Implement security best practices and compliance standards within the AWS environment.

Collaboration and Communication:
Collaborate effectively with engineering managers, product managers, QA, DevOps, and other stakeholders.
Communicate architectural decisions and trade-offs clearly and concisely to both technical and non-technical audiences.
Facilitate technical discussions and resolve architectural challenges.
Mentor and guide engineering teams on architectural best practices and technology adoption.

Technology Evaluation and Adoption:
Research and evaluate new technologies and trends in UI frameworks, Java ecosystems, and AWS services.
Conduct proof-of-concepts and feasibility studies for new technologies.
Define adoption strategies for new technologies within the organization.

Performance and Scalability:
Design systems with a focus on performance, scalability, and maintainability.
Identify and address potential performance bottlenecks and scalability limitations.
Define and implement monitoring and alerting strategies for applications and infrastructure.

Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
15+ years of experience in software development with a strong focus on Java.
5+ years of experience in designing and architecting complex applications, including both UI and backend systems.
Deep understanding of microservices architecture principles and best practices. 
Strong expertise in Java and related frameworks (e.g., Spring Boot, Jakarta EE).
Solid experience with modern UI frameworks (e.g., React, Angular, Vue.js) and their architectural patterns.
Significant hands-on experience with Amazon Web Services (AWS) and its core services.
Experience with containerization technologies (e.g., Docker, Kubernetes) and orchestration on AWS (ECS/EKS).
Proficiency in designing and implementing RESTful APIs and other integration patterns.
Understanding of database technologies (both relational and NoSQL) and their integration with microservices on AWS.
Experience with infrastructure-as-code (IaC) tools like CloudFormation or Terraform.
Strong understanding of security best practices for both UI and backend applications in a cloud environment.
Excellent communication, presentation, and interpersonal skills.
Proven ability to lead technical discussions and influence architectural decisions.

Preferred Qualifications
Experience with event-driven architectures and messaging systems (e.g., Kafka, SQS).
Familiarity with CI/CD pipelines and DevOps practices on AWS.
Experience with performance testing and optimization techniques.
Knowledge of different architectural patterns (e.g., CQRS, Event Sourcing).
Experience in [Mention any specific domain or industry relevant to your company].
AWS certifications (e.g., AWS Certified Solutions Architect – Associate/Professional).
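The Event Sourcing pattern named among the preferred qualifications rebuilds current state by replaying an append-only event log instead of storing mutable rows. A minimal sketch of that fold, using an invented account domain purely for illustration, is:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    kind: str      # "deposited" or "withdrawn" in this toy domain
    amount: int

def apply(balance: int, event: Event) -> int:
    """Pure state-transition function: (state, event) -> new state."""
    if event.kind == "deposited":
        return balance + event.amount
    if event.kind == "withdrawn":
        return balance - event.amount
    raise ValueError(f"unknown event kind: {event.kind}")

def replay(events: list[Event], initial: int = 0) -> int:
    """Fold the full event log to recover the current state."""
    balance = initial
    for event in events:
        balance = apply(balance, event)
    return balance

log = [Event("deposited", 100), Event("withdrawn", 30), Event("deposited", 5)]
print(replay(log))  # 75
```

Because the log is immutable, the same replay also yields any historical state for free, which is what makes the pattern attractive for audit-heavy domains; pairing it with CQRS means the replayed state feeds separate read models.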
Posted 1 day ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Responsibilities
Design, develop, and maintain scalable and high-performance web and mobile applications.
Work across the stack with React, React Native, Golang, and Node.js.
Architect and optimize APIs and microservices to ensure reliability, scalability, and security.
Deploy, monitor, and manage cloud infrastructure using Kubernetes and AWS.
Collaborate with product managers, designers, and other engineers to build seamless user experiences.
Conduct code reviews, mentor junior developers, and promote best practices in software development.
Continuously improve system performance, observability, and developer productivity.
Troubleshoot and resolve production issues, ensuring uptime and reliability.

Requirements
5+ years of experience as a Full Stack Engineer, working on production-grade applications.
Strong proficiency in React.js and React Native for front-end development.
Experience with Golang and Node.js for backend development.
Solid understanding of microservices architecture and API development.
Experience with Kubernetes, Docker, and cloud platforms (AWS).
Knowledge of databases (SQL and NoSQL) such as PostgreSQL and DynamoDB.
Familiarity with CI/CD pipelines and DevOps practices.
Strong problem-solving and analytical skills.
Experience building offline-first applications.
Excellent communication and teamwork abilities.

Nice-to-Have
Experience in the POS or payments industry.
Knowledge of GraphQL and gRPC.
Familiarity with event-driven architecture (Kafka, RabbitMQ, etc.).
Exposure to performance tuning and high-traffic system optimizations.

This job was posted by Adarsha Kumari from Oolio.
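"Offline-first", as required above, usually means a client queues writes locally while disconnected and flushes them to the server in order once connectivity returns. A toy sketch of that queue-and-flush idea (class and operation names invented, in Python rather than the listing's stack purely for illustration):

```python
class OfflineQueue:
    """Buffer operations locally; flush them in order when back online."""

    def __init__(self, send):
        self._send = send            # callable that delivers one op to the server
        self._pending: list[dict] = []
        self.online = False

    def submit(self, op: dict) -> None:
        if self.online:
            self._send(op)
        else:
            self._pending.append(op)  # persisted to local storage in a real client

    def reconnect(self) -> None:
        self.online = True
        while self._pending:          # preserve original order on flush
            self._send(self._pending.pop(0))

# Usage: a POS terminal keeps taking orders while the network is down.
delivered: list[dict] = []
queue = OfflineQueue(delivered.append)
queue.submit({"op": "add_item", "order": 1})
queue.submit({"op": "pay", "order": 1})
assert delivered == []                # nothing sent while offline
queue.reconnect()
assert [d["op"] for d in delivered] == ["add_item", "pay"]
```

Real offline-first systems additionally need conflict resolution (e.g. last-write-wins or CRDTs) when the same record was changed on both sides; this sketch shows only the buffering half.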
Posted 1 day ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Principal Data Engineer (MTS4 / Principal Engineer)

About the Role
As a Principal Data Engineer, you will drive the strategy, architecture, and execution of large-scale data solutions across our function. This role involves tackling highly ambiguous, complex challenges where the business problem may not be fully defined at the outset. You will partner closely with cross-functional teams (Engineering, Product, Operations) to shape and deliver our data roadmap. Your work will have a profound impact on our function's data capabilities, influencing multiple teams' technical and product direction. You should bring deep expertise in designing and developing robust data pipelines and platforms, leveraging technologies such as Spark, Airflow, Kafka, and other emerging tools. You will set standards and best practices that raise the bar for engineering excellence across the organization.

Key Responsibilities

Architect & Define Scope
Own end-to-end design of critical data pipelines and platforms in an environment characterized by high ambiguity.
Translate loosely defined business objectives into a clear technical plan, breaking down complex problems into achievable milestones.

Technology Leadership & Influence
Provide thought leadership in data engineering, driving the adoption of Spark, Airflow, Kafka, and other relevant technologies (e.g., Hadoop, Flink, Kubernetes, Snowflake, etc.).
Lead design reviews and champion best practices for coding, system architecture, data quality, and reliability.
Influence senior stakeholders (Engineers, EMs, Product Managers) on technology decisions and roadmap priorities.

Execution & Delivery
Spearhead strategic, multi-team projects that advance the organization's data infrastructure and capabilities.
Deconstruct complex architectures into simpler components that can be executed by various teams in parallel. 
- Drive operational excellence, owning escalations and ensuring high availability, scalability, and cost-effectiveness of our data solutions.
- Mentor and develop engineering talent, fostering a culture of collaboration and continuous learning.

Impact & Technical Complexity
- Shape how the organization operates by introducing innovative data solutions and strategic technical direction.
- Solve endemic, highly complex data engineering problems with robust, scalable, and cost-optimized solutions.
- Continuously balance short-term business needs with the long-term architectural vision.

Process Improvement & Best Practices
- Set and enforce engineering standards that elevate quality and productivity across multiple teams.
- Lead by example in code reviews, automation, CI/CD practices, and documentation.
- Champion a culture of continuous improvement, driving adoption of new tools and methodologies to keep our data ecosystem cutting-edge.

Qualifications

Education & Experience
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- 5+ years of software/data engineering experience, with significant exposure to large-scale distributed systems.

Technical Expertise
- Demonstrated proficiency with Spark, Airflow, Kafka, and at least one major programming language (e.g., Python, Scala, Java).
- Experience with data ecosystem technologies such as Hadoop, Flink, Snowflake, and Kubernetes.
- Proven track record of architecting and delivering highly scalable data infrastructure solutions.

Leadership & Communication
- Ability to navigate and bring clarity to ambiguous situations.
- Strong cross-functional collaboration skills, influencing both technical and non-technical stakeholders.
- Experience coaching and mentoring senior engineers.

Problem-Solving
- History of tackling complex, ambiguous data challenges and delivering tangible results.
- Comfort making informed trade-offs between opportunity and architectural complexity.
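The pipelines this role owns (Spark transforms orchestrated by Airflow) reduce to running tasks in dependency order. As a plain-Python sketch of the DAG idea Airflow formalizes, here is a tiny dependency-ordered runner; the three-stage extract/transform/load tasks are hypothetical.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+


def run_pipeline(tasks, deps):
    """Run callables in dependency order, mimicking how a scheduler
    such as Airflow executes a DAG. `deps` maps task -> set of upstream tasks.
    Each task receives the dict of results computed so far."""
    order = TopologicalSorter(deps).static_order()
    results = {}
    for name in order:
        results[name] = tasks[name](results)
    return results


# Hypothetical three-stage pipeline: extract -> transform -> load.
tasks = {
    "extract": lambda r: [1, 2, 3],
    "transform": lambda r: [x * 10 for x in r["extract"]],
    "load": lambda r: sum(r["transform"]),
}
deps = {"extract": set(), "transform": {"extract"}, "load": {"transform"}}
```

A real Airflow DAG adds scheduling, retries, and distributed execution on top of this ordering, but the topological core is the same.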
Posted 1 day ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
We are seeking a highly skilled and experienced Senior Developer with in-depth knowledge of TM Forum (TMF) standards and frameworks to join our growing engineering team. In this role, you will play a pivotal part in designing, developing, and implementing robust, scalable software solutions that adhere to industry best practices and accelerate our digital transformation journey. You will leverage your expertise in the TMF Open APIs, Information Framework (SID), and Business Process Framework (eTOM) to ensure seamless integration, interoperability, and efficiency across our systems. If you are passionate about building high-quality software within a standardized telecom ecosystem, we encourage you to apply.

Key Responsibilities:
- Lead the design, development, and implementation of complex software solutions, primarily focusing on systems aligned with TM Forum (TMF) standards.
- Translate business requirements and TMF specifications into technical designs and architectural blueprints.
- Apply knowledge of the TMF Information Framework (SID) to model data, define common data entities, and ensure data consistency across applications.
- Utilize understanding of the TMF Business Process Framework (eTOM) to align software solutions with standardized business processes.
- Participate in all phases of the software development lifecycle, including requirements gathering, design, coding, testing, deployment, and support.
- Collaborate effectively with product managers, architects, QA engineers, and other stakeholders to deliver high-quality solutions.
- Conduct code reviews, mentor junior developers, and contribute to best practices and continuous improvement initiatives.
- Troubleshoot and resolve complex technical issues, ensuring optimal system performance and reliability.
- Stay up to date with the latest TMF releases, industry trends, and emerging technologies.

Required Skills & Experience:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- 5+ years of experience in software development, with a strong background in building enterprise-grade applications.
- Demonstrable in-depth knowledge and hands-on experience with TM Forum (TMF) standards and frameworks, including:
  - TMF Open APIs (highly preferred): experience in designing, implementing, and consuming TMF-compliant APIs.
  - TMF Information Framework (SID): strong understanding of data modeling principles and experience applying SID to define data entities and relationships.
  - TMF Business Process Framework (eTOM): familiarity with eTOM processes and how to align software development with these standardized operations.
- Proficiency in at least one major programming language (e.g., Java, Python, Go, C#, Node.js).
- Experience with microservices architecture, cloud platforms (e.g., AWS, Azure, GCP), and containerization technologies (e.g., Docker, Kubernetes).
- Solid understanding of relational and/or NoSQL databases.
- Experience with agile development methodologies (Scrum, Kanban).
- Strong problem-solving, analytical, and debugging skills.
- Excellent communication, collaboration, and interpersonal skills.

Preferred Qualifications:
- Experience in the telecommunications industry.
- TMF certifications (e.g., Open API, SID).
- Experience with CI/CD pipelines and DevOps practices.
- Familiarity with messaging queues (e.g., Kafka, RabbitMQ).
- Knowledge of network orchestration, service assurance, or other telecom-specific domains.
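TMF Open API resources are JSON documents that conventionally carry `id`, `href`, and `@type` fields alongside domain attributes. The sketch below assembles such a resource in plain Python; the base URL, the `ProductOffering` example, and its attribute values are illustrative assumptions, not taken from any specific TMF specification version.

```python
def make_tmf_resource(resource_type, resource_id, base_url, **fields):
    """Assemble a minimal TMF-Open-API-style JSON resource.

    TMF resources conventionally include `id`, `href`, and `@type`;
    extra keyword arguments become domain attributes."""
    resource = {
        "id": resource_id,
        "href": f"{base_url}/{resource_id}",
        "@type": resource_type,
    }
    resource.update(fields)
    return resource


# Hypothetical product-catalog entry in the TMF resource shape.
product = make_tmf_resource(
    "ProductOffering",
    "42",
    "https://api.example.com/productCatalogManagement/v4/productOffering",
    name="Fibre 500",
    lifecycleStatus="Active",
)
```

A conformant implementation would follow the exact schema of the relevant TMF API (e.g., TMF620 for product catalog); this only shows the common envelope.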
Posted 1 day ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
As the global leader in high-speed connectivity, Ciena is committed to a people-first approach. Our teams enjoy a culture focused on prioritizing a flexible work environment that empowers individual growth, well-being, and belonging. We're a technology company that leads with our humanity—driving our business priorities alongside meaningful social, community, and societal impact. We believe in the power of people. We are a network strategy and technology company that is motivated by making a difference in people's lives – their productivity, creativity, health and comfort.

We're looking for a highly motivated, talented and experienced engineer who is passionate about product verification automation and is ready to assume a leadership position within the team on future projects. You will certify solutions that give our customers opportunities to differentiate their service offerings in a very competitive market. The ideal candidate is a flexible, highly technical problem solver with interdisciplinary knowledge of software, test, and test automation. You feel at home in a dynamic, multi-disciplined engineering environment, acting as an interface between product design, other Blue Planet test engineering teams, and members of other functional groups (support, documentation, marketing, etc.).

RESPONSIBILITIES
- Engage with various engineering teams, product line managers and product owners to transform concepts and high-level requirements into optimized test coverage and an enhanced customer experience.
- Automate and maintain all manually devised and executed test cases using automation best practices, and maintain the CI/CD pipeline framework.
- Code E2E automated tests for the Angular UI frontend with Cucumber/Webdriver.io.
- Code REST API testing automation.
- Code system testing with Ansible and Bash scripting.
- Drive (plan and implement) lab or simulation environment setup activities to fully address proposed testing scenarios, and coordinate equipment acquisition/sharing agreements with the various teams concerned.
- Analyse test results and prepare test reports.
- Investigate software defects, highlight critical issues that can have potential customer impact, and consult with software development engineers to find resolutions or to address problems related to specifications and/or test plans/procedures.
- Raise Jira bugs for product defects.
- Report on automation status.
- Research the best tools and approaches for automating required functionality.

Skills expected from the candidate:
- Frontend testing frameworks/libraries: Cucumber/Webdriver.io
- Backend programming/markup languages: Python
- Backend testing: REST API testing automation tools, Postman/Newman, Jasmine
- Load testing: JMeter, Grafana + Prometheus
- Container management: Docker, Kubernetes, OpenStack
- Testing theory: terminology, testing types, asynchronous automated testing
- Continuous integration tools: Jenkins, TeamCity, GitLab
- Cloud environments: AWS, Azure, Google Cloud
- Version control systems: Git, Bitbucket
- System testing automation with Bash, Shell, Python, and Ansible scripting
- Hands-on experience of CI/CD pipeline configuration and maintenance
- Solid operational and administration experience with Unix operating systems
- Understanding of web application and microservice solution architecture
- Strong ability to rapidly learn complex new technological concepts and apply that knowledge in daily activities
- Excellent written (documentation) and interpersonal communication skills (English)
- Strong ability to work as part of a team or independently with little supervision
- Experience working as part of an Agile scrum team and with DevOps processes

Desirable for the candidate:
- Ticketing: Jira
- Documentation: Confluence, GitLab
- Frontend programming/markup languages: TypeScript/JavaScript, HTML, CSS, SVG
- Frontend development frameworks/libraries: Angular 2+, Node.js/npm, D3.js, gulp
- Programming theory: algorithms and data structures, relational and graph database concepts, etc.

Non-critical extras:
- Domain: Telecom, Computer Networking, OSS
- Builds: Maven, NPM, JVM, NodeJS
- Databases: PostgreSQL, Neo4j, ClickHouse
- Test management: TestRail
- Other skills: ElasticSearch, Drools, Kafka integration, REST (on Spring MVC), SSO (LDAP, Reverse Proxy, OAuth2)

Not ready to apply? Join our Talent Community to get relevant job alerts straight to your inbox.

At Ciena, we are committed to building and fostering an environment in which our employees feel respected, valued, and heard. Ciena values the diversity of its workforce and respects its employees as individuals. We do not tolerate any form of discrimination. Ciena is an Equal Opportunity Employer, including disability and protected veteran status. If contacted in relation to a job opportunity, please advise Ciena of any accommodation measures you may require.
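The REST API test automation described above usually boils down to contract checks on response payloads. As a minimal sketch, here is a payload validator with a `unittest`-style test class around it; the alarm resource shape and field names are hypothetical, not from any Blue Planet API.

```python
import unittest


def validate_alarm(payload):
    """Check that an (assumed) alarm REST payload has the required fields
    and a recognized severity. Returns (ok, message)."""
    required = {"id", "severity", "state"}
    missing = required - payload.keys()
    if missing:
        return False, f"missing fields: {sorted(missing)}"
    if payload["severity"] not in {"critical", "major", "minor", "warning"}:
        return False, f"unknown severity: {payload['severity']}"
    return True, "ok"


class AlarmContractTest(unittest.TestCase):
    """Example of the kind of automated check a REST test suite would run
    against a recorded or live response body."""

    def test_valid_payload(self):
        ok, msg = validate_alarm({"id": "a1", "severity": "major", "state": "raised"})
        self.assertTrue(ok, msg)

    def test_missing_field(self):
        ok, _ = validate_alarm({"id": "a1"})
        self.assertFalse(ok)
```

In practice the payload would come from an HTTP call (e.g., via Postman/Newman or a Python client) rather than a literal dict.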
Posted 1 day ago
3.0 - 6.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Responsibilities
- Design, develop, and maintain scalable backend services and APIs to support real-time gaming features, payments, etc.
- Design and optimize relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., MongoDB, Redis) databases for high-performance data storage and retrieval.
- Architect systems capable of handling millions of concurrent users with low latency and high throughput.
- Implement and manage secure payment gateway integrations (Razorpay, Juspay, etc.) for deposits, withdrawals, and in-app purchases.
- Develop core backend modules for game logic, matchmaking, leaderboards, and scoring systems.
- Implement robust user authentication (e.g., OAuth, JWT) and ensure data privacy and security compliance.
- Develop and deploy microservices using frameworks like Spring Boot, Express.js, or Django, ensuring modularity and fault tolerance.
- Monitor system performance using tools like Prometheus, Grafana, or New Relic, and optimize infrastructure for reliability and cost efficiency.
- Work closely with product managers, frontend developers, and DevOps to ensure seamless feature rollouts and smooth user experiences.
- Write clean, maintainable, and testable code. Review code from peers and mentor junior engineers.

Requirements
- Bachelor's degree, preferably in Computer Science or a related field.
- 3-6 years of hands-on experience, with strong experience in backend development with Node.js, Java, and Golang.
- Proficiency in building RESTful APIs and working with frameworks like Express.js, Spring Boot, or Flask.
- Experience with cloud platforms (AWS, GCP) and tools like Docker, Kubernetes, and CI/CD pipelines.
- Experience in building real-time systems using WebSockets or RPC.
- Experience with Kafka, RabbitMQ, or other message brokers for asynchronous processing.

This job was posted by Ananya Jaiswal from HaaNaa.
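JWT-based authentication, mentioned in the responsibilities above, rests on an HMAC-signed `header.payload.signature` structure. The stdlib sketch below shows that layout for the HS256 case; the secret, claims, and helper names are illustrative, and a production service would use a vetted JWT library with expiry and algorithm checks rather than this.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # illustrative; real services load this from a secret store


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs do."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def sign_token(claims: dict) -> str:
    """Build an HS256-style token: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"


def verify_token(token: str) -> bool:
    """Recompute the signature and compare in constant time."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)
```

Tampering with any part of the token invalidates the signature, which is what makes the scheme usable for stateless session auth.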
Posted 1 day ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Principal Data Engineer (MTS4 / Principal Engineer)

About the Role
As a Principal Data Engineer, you will drive the strategy, architecture, and execution of large-scale data solutions across our function. This role involves tackling highly ambiguous, complex challenges where the business problem may not be fully defined at the outset. You will partner closely with cross-functional teams (Engineering, Product, Operations) to shape and deliver our data roadmap. Your work will have a profound impact on our function's data capabilities, influencing multiple teams' technical and product direction. You should bring deep expertise in designing and developing robust data pipelines and platforms, leveraging technologies such as Spark, Airflow, Kafka, and other emerging tools. You will set standards and best practices that raise the bar for engineering excellence across the organization.

Key Responsibilities

Architect & Define Scope
- Own the end-to-end design of critical data pipelines and platforms in an environment characterized by high ambiguity.
- Translate loosely defined business objectives into a clear technical plan, breaking down complex problems into achievable milestones.

Technology Leadership & Influence
- Provide thought leadership in data engineering, driving the adoption of Spark, Airflow, Kafka, and other relevant technologies (e.g., Hadoop, Flink, Kubernetes, Snowflake).
- Lead design reviews and champion best practices for coding, system architecture, data quality, and reliability.
- Influence senior stakeholders (engineers, EMs, product managers) on technology decisions and roadmap priorities.

Execution & Delivery
- Spearhead strategic, multi-team projects that advance the organization's data infrastructure and capabilities.
- Deconstruct complex architectures into simpler components that can be executed by various teams in parallel.
- Drive operational excellence, owning escalations and ensuring high availability, scalability, and cost-effectiveness of our data solutions.
- Mentor and develop engineering talent, fostering a culture of collaboration and continuous learning.

Impact & Technical Complexity
- Shape how the organization operates by introducing innovative data solutions and strategic technical direction.
- Solve endemic, highly complex data engineering problems with robust, scalable, and cost-optimized solutions.
- Continuously balance short-term business needs with the long-term architectural vision.

Process Improvement & Best Practices
- Set and enforce engineering standards that elevate quality and productivity across multiple teams.
- Lead by example in code reviews, automation, CI/CD practices, and documentation.
- Champion a culture of continuous improvement, driving adoption of new tools and methodologies to keep our data ecosystem cutting-edge.

Qualifications

Education & Experience
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- 5+ years of software/data engineering experience, with significant exposure to large-scale distributed systems.

Technical Expertise
- Demonstrated proficiency with Spark, Airflow, Kafka, and at least one major programming language (e.g., Python, Scala, Java).
- Experience with data ecosystem technologies such as Hadoop, Flink, Snowflake, and Kubernetes.
- Proven track record of architecting and delivering highly scalable data infrastructure solutions.

Leadership & Communication
- Ability to navigate and bring clarity to ambiguous situations.
- Strong cross-functional collaboration skills, influencing both technical and non-technical stakeholders.
- Experience coaching and mentoring senior engineers.

Problem-Solving
- History of tackling complex, ambiguous data challenges and delivering tangible results.
- Comfort making informed trade-offs between opportunity and architectural complexity.
Posted 1 day ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Join us as a Software Engineer

This is an opportunity for a driven Software Engineer to take on an exciting new career challenge. Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority. It's a chance to hone your existing technical skills and advance your career. We're offering this role at associate level.

What you'll do
In your new role, you'll engineer and maintain innovative, customer-centric, high-performance, secure and robust solutions. We are seeking a highly skilled and motivated AWS Cloud Engineer with deep expertise in Amazon EKS, Kubernetes, Docker, and Helm chart development. The ideal candidate will be responsible for designing, implementing, and maintaining scalable, secure, and resilient containerized applications in the cloud.

You'll also be:
- Designing, deploying, and managing Kubernetes clusters using Amazon EKS
- Developing and maintaining Helm charts for deploying containerized applications
- Building and managing Docker images and registries for microservices
- Automating infrastructure provisioning using Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation)
- Monitoring and troubleshooting Kubernetes workloads and cluster health
- Supporting CI/CD pipelines for containerized applications
- Collaborating with development and DevOps teams to ensure seamless application delivery
- Ensuring security best practices are followed in container orchestration and cloud environments
- Optimizing the performance and cost of cloud infrastructure

The skills you'll need
You'll need a background in software engineering, software design, architecture, and an understanding of how your area of expertise supports our customers. You'll need experience in the Java full stack, including microservices, ReactJS, AWS, Spring, Spring Boot, Spring Batch, PL/SQL, Oracle, PostgreSQL, JUnit, Mockito, cloud, REST APIs, API Gateway, Kafka and API development.

You'll also need:
- 3+ years of hands-on experience with AWS services, especially EKS, EC2, IAM, VPC, and CloudWatch
- Strong expertise in Kubernetes architecture, networking, and resource management
- Proficiency in Docker and container lifecycle management
- Experience in writing and maintaining Helm charts for complex applications
- Familiarity with CI/CD tools such as Jenkins, GitLab CI, or GitHub Actions
- A solid understanding of Linux systems, shell scripting, and networking concepts
- Experience with monitoring tools like Prometheus, Grafana, or Datadog
- Knowledge of security practices in cloud and container environments

Preferred qualifications:
- AWS Certified Solutions Architect or AWS Certified DevOps Engineer
- Experience with service mesh technologies (e.g., Istio, Linkerd)
- Familiarity with GitOps practices and tools like ArgoCD or Flux
- Experience with logging and observability tools (e.g., ELK stack, Fluentd)
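A Helm chart ultimately renders Kubernetes manifests such as a Deployment from a set of values. To make that shape concrete, here is a hedged sketch that builds a minimal `apps/v1` Deployment as a Python dict, the same structure a Helm template would emit; the service name, image, and defaults are invented for illustration.

```python
def deployment_manifest(name, image, replicas=2, port=8080):
    """Build a minimal Kubernetes apps/v1 Deployment as a dict.

    This mirrors what a Helm template renders: metadata labels,
    a selector matching the pod template, and one container."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [
                        {
                            "name": name,
                            "image": image,
                            "ports": [{"containerPort": port}],
                        }
                    ],
                },
            },
        },
    }


# Hypothetical service; in Helm these values would come from values.yaml.
manifest = deployment_manifest("payments-api", "registry.example.com/payments-api:1.0.0", replicas=3)
```

Serializing this dict to YAML and applying it with `kubectl apply -f` would create the Deployment; Helm adds templating, releases, and upgrades on top.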
Posted 1 day ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Join us as a Software Engineer

This is an opportunity for a driven Software Engineer to take on an exciting new career challenge. Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority. It's a chance to hone your existing technical skills and advance your career. We're offering this role at associate level.

What you'll do
In your new role, you'll engineer and maintain innovative, customer-centric, high-performance, secure and robust solutions. You'll be working within a feature team and using your extensive experience to engineer software, scripts and tools that are often complex, as well as liaising with other engineers, architects and business analysts across the platform.

You'll also be:
- Producing complex and critical software rapidly and to a high quality, adding value to the business
- Working in permanent teams responsible for the full life cycle, from initial development, through enhancement and maintenance, to replacement or decommissioning
- Collaborating to optimise our software engineering capability
- Designing, producing, testing and implementing our working code
- Working across the life cycle, from requirements analysis and design, through coding, to testing, deployment and operations

The skills you'll need
You'll need at least five years of experience in data sourcing, including real-time data integration, and a certification in AWS cloud.
You'll also need:
- Experience in AWS Cloud and Airflow, and in data migration from on-premise to cloud, with knowledge of databases such as Snowflake, AWS Data Lake, PostgreSQL, Oracle, MongoDB and AWS DynamoDB
- Experience in multiple programming languages or low-code toolsets, Kafka and StreamSets
- Experience of DevOps, testing and Agile methodology and associated toolsets
- A background in solving highly complex, analytical and numerical problems
- Experience of implementing programming best practice, especially around scalability, automation, virtualisation, optimisation, availability and performance
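Real-time data integration of the kind this role describes often means aggregating a stream of events into fixed time windows before loading them downstream. Here is a small stdlib sketch of tumbling-window counting; the event tuples and window size are invented for illustration.

```python
from collections import defaultdict


def tumbling_window_counts(events, window_seconds=60):
    """Group (timestamp, key) events into fixed (tumbling) windows and
    count occurrences per key within each window. Returns a dict keyed
    by window start time, in ascending order."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = ts - ts % window_seconds
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}


# Hypothetical event stream: (epoch seconds, event type).
events = [(5, "login"), (30, "login"), (61, "payment"), (65, "login")]
```

Streaming engines (Kafka Streams, Flink, Spark Structured Streaming) implement this same grouping with watermarks and state stores to handle late and unbounded data.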
Posted 1 day ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Title: Principal Data Engineer (MTS4 / Principal Engineer)

About the Role
As a Principal Data Engineer, you will drive the strategy, architecture, and execution of large-scale data solutions across our function. This role involves tackling highly ambiguous, complex challenges where the business problem may not be fully defined at the outset. You will partner closely with cross-functional teams (Engineering, Product, Operations) to shape and deliver our data roadmap. Your work will have a profound impact on our function's data capabilities, influencing multiple teams' technical and product direction. You should bring deep expertise in designing and developing robust data pipelines and platforms, leveraging technologies such as Spark, Airflow, Kafka, and other emerging tools. You will set standards and best practices that raise the bar for engineering excellence across the organization.

Key Responsibilities

Architect & Define Scope
- Own the end-to-end design of critical data pipelines and platforms in an environment characterized by high ambiguity.
- Translate loosely defined business objectives into a clear technical plan, breaking down complex problems into achievable milestones.

Technology Leadership & Influence
- Provide thought leadership in data engineering, driving the adoption of Spark, Airflow, Kafka, and other relevant technologies (e.g., Hadoop, Flink, Kubernetes, Snowflake).
- Lead design reviews and champion best practices for coding, system architecture, data quality, and reliability.
- Influence senior stakeholders (engineers, EMs, product managers) on technology decisions and roadmap priorities.

Execution & Delivery
- Spearhead strategic, multi-team projects that advance the organization's data infrastructure and capabilities.
- Deconstruct complex architectures into simpler components that can be executed by various teams in parallel.
- Drive operational excellence, owning escalations and ensuring high availability, scalability, and cost-effectiveness of our data solutions.
- Mentor and develop engineering talent, fostering a culture of collaboration and continuous learning.

Impact & Technical Complexity
- Shape how the organization operates by introducing innovative data solutions and strategic technical direction.
- Solve endemic, highly complex data engineering problems with robust, scalable, and cost-optimized solutions.
- Continuously balance short-term business needs with the long-term architectural vision.

Process Improvement & Best Practices
- Set and enforce engineering standards that elevate quality and productivity across multiple teams.
- Lead by example in code reviews, automation, CI/CD practices, and documentation.
- Champion a culture of continuous improvement, driving adoption of new tools and methodologies to keep our data ecosystem cutting-edge.

Qualifications

Education & Experience
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- 5+ years of software/data engineering experience, with significant exposure to large-scale distributed systems.

Technical Expertise
- Demonstrated proficiency with Spark, Airflow, Kafka, and at least one major programming language (e.g., Python, Scala, Java).
- Experience with data ecosystem technologies such as Hadoop, Flink, Snowflake, and Kubernetes.
- Proven track record of architecting and delivering highly scalable data infrastructure solutions.

Leadership & Communication
- Ability to navigate and bring clarity to ambiguous situations.
- Strong cross-functional collaboration skills, influencing both technical and non-technical stakeholders.
- Experience coaching and mentoring senior engineers.

Problem-Solving
- History of tackling complex, ambiguous data challenges and delivering tangible results.
- Comfort making informed trade-offs between opportunity and architectural complexity.
Posted 1 day ago
12.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Designation: Deputy General Manager (DGM) / Senior Manager (SM)
Team Composition: Internal Developers + Low-Code Developers + Time & Material (T&M) Vendors
Experience: 8–12 years in software engineering, with 3+ years in an engineering leadership role

🔍 Role Overview
We are looking for a hands-on and outcome-oriented Engineering Manager to lead the development and delivery of our mobile, web, and low-code applications. You will manage hybrid teams (in-house + T&M vendors), decide on the optimal development approach (high-code vs. low-code using Unify Apps), and own the end-to-end technical delivery of business-critical features and applications.

🛠️ Key Responsibilities

Team & Delivery Leadership
- Lead engineering teams delivering across mobile (React Native), web (React), backend (Node.js, Spring Boot), and low-code (Unify Apps) platforms.
- Drive execution across both internal and T&M vendor teams, ensuring clarity, speed, and accountability.
- Coach internal engineers and ensure vendor output meets quality and delivery benchmarks.

High-Code vs. Low-Code Decisioning
- Evaluate requirements and decide on the right implementation path (Unify Apps vs. traditional code).
- Ensure low-code is leveraged for agility, while high-code is used for complex, scalable components.
- Maintain architectural alignment across both delivery tracks.

Technical Oversight
- Ensure scalable and maintainable design using:
  - Frontend: React
  - Mobile: React Native
  - Backend: Node.js, Spring Boot
  - Low-Code: Unify Apps
  - Data: Postgres, Snowflake
  - Messaging: Kafka
  - Infrastructure: AWS

Vendor Management
- Oversee daily collaboration with T&M vendor teams across both high-code and low-code streams.
- Ensure timely delivery, quality, and knowledge handover from vendors.
- Track vendor KPIs and optimize team allocation as needed.

Agile Execution & Collaboration
- Work closely with Product Managers, QA, Infra, and Security to deliver features aligned with business priorities.
- Run Agile ceremonies (sprint planning, standups, retros) and monitor delivery velocity.
- Maintain clear documentation and ensure traceability of work.

✅ Required Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 8–12 years of software engineering experience, with 3+ years managing delivery teams.
- Strong technical background in:
  - React, React Native, Node.js, Spring Boot
  - Low-code platforms, especially Unify Apps
  - Microservices, Kafka, AWS, Postgres, Snowflake
- Demonstrated ability to manage hybrid teams (internal + vendors) and full-cycle delivery.

🌟 Preferred Skills
- Experience delivering both low-code and high-code applications at scale.
- Knowledge of DevOps practices, CI/CD, Git workflows, and observability.
- Strong planning, estimation, and communication skills.
- Experience working in high-availability or operational environments (e.g., QSR, retail, e-commerce).
Posted 1 day ago
10.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Role Overview
We are looking for a Senior Backend Engineer with deep expertise in Python and scalable system architecture. This is a hands-on individual contributor (IC) role where you'll design and develop high-performance, cloud-native backend services for enterprise-scale platforms. You'll work closely with cross-functional teams to deliver robust, production-grade solutions.

Key Responsibilities
- Design and build distributed, microservices-based systems using Python
- Develop RESTful APIs, background workers, schedulers, and scalable data pipelines
- Lead architecture discussions, technical reviews, and proof-of-concept initiatives
- Model data using SQL and NoSQL technologies (PostgreSQL, MongoDB, DynamoDB, ClickHouse)
- Ensure high availability and observability using tools like CloudWatch, Grafana, and Datadog
- Automate infrastructure and CI/CD workflows using Terraform, GitHub Actions, or Jenkins
- Prioritize security, scalability, and fault tolerance across all services
- Own the entire lifecycle of backend components, from development to production support
- Document system architecture and contribute to internal knowledge sharing

Requirements
- 10+ years of backend development experience with strong Python proficiency
- Deep understanding of microservices, Docker, Kubernetes, and cloud-native development (AWS preferred)
- Expertise in API design, authentication (OAuth2), rate limiting, and best practices
- Experience with message queues and async systems (Kafka, SQS, RabbitMQ)
- Strong database knowledge, both relational and NoSQL
- Familiarity with DevOps tools: Terraform, CloudFormation, GitHub Actions, Jenkins
- An effective communicator with experience working in distributed, fast-paced teams
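API rate limiting, listed in the requirements above, is commonly implemented with a token bucket: a counter refilled at a fixed rate, decremented per request. Below is a minimal sketch; the capacity, rate, and injectable clock are illustrative choices, and production services typically back this with Redis so the limit is shared across instances.

```python
import time


class TokenBucket:
    """Classic token-bucket limiter: up to `capacity` tokens,
    refilled continuously at `rate` tokens per second.
    The clock is injectable to make the behaviour testable."""

    def __init__(self, capacity: float, rate: float, clock=time.monotonic):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.clock = clock
        self.last = clock()

    def allow(self) -> bool:
        """Spend one token if available; returns False when rate-limited."""
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

The bucket allows short bursts up to `capacity` while enforcing the average `rate`, which is why it is preferred over a fixed per-second counter for user-facing APIs.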
Posted 1 day ago
7.0 years
0 Lacs
Udaipur, Rajasthan, India
On-site
Requirements:
- 7+ years of hands-on Python development experience
- Proven experience designing and leading scalable backend systems
- Expert knowledge of Python and at least one framework (e.g., Django, Flask)
- Familiarity with ORM libraries and server-side templating (Jinja2, Mako, etc.)
- Strong understanding of multi-threading, multi-process, and event-driven programming
- Proficiency in user authentication, authorization, and security compliance
- Skill in frontend basics: JavaScript, HTML5, CSS3
- Experience designing and implementing scalable backend architectures and microservices
- Ability to integrate multiple databases, data sources, and third-party services
- Proficiency with version control systems (Git)
- Experience with deployment pipelines, server environment setup, and configuration
- Ability to implement and configure queueing systems like RabbitMQ or Apache Kafka
- Writing clean, reusable, testable code with strong unit test coverage
- Deep debugging skills and secure coding practices, ensuring accessibility and data protection compliance
- Optimizing application performance for various platforms (web, mobile)
- Collaborating effectively with frontend developers, designers, and cross-functional teams
- Leading deployment, configuration, and server environment efforts
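The multi-threading and queueing skills listed above come together in the classic producer/consumer pattern: jobs pushed onto a thread-safe queue and drained by a pool of workers. Here is a small stdlib sketch of that pattern; the job list, handler, and worker count are invented for illustration.

```python
import queue
import threading


def run_workers(jobs, handler, n_workers=4):
    """Fan a list of jobs out to worker threads via a thread-safe queue
    and collect results. All jobs are enqueued before workers start,
    so an empty queue signals completion."""
    q = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                job = q.get_nowait()
            except queue.Empty:
                return
            res = handler(job)
            with lock:  # list.append is atomic, but the lock keeps intent explicit
                results.append(res)

    for job in jobs:
        q.put(job)
    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

A broker such as RabbitMQ or Kafka plays the role of the queue across processes and machines; the in-process version shows the same decoupling in miniature.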
Posted 1 day ago
5.0 - 7.0 years
0 Lacs
Udaipur, Rajasthan, India
Remote
At GKM IT, we thrive on solving complex problems with clean, scalable code. We're looking for a Python Engineer - Senior II who's passionate about backend architecture, microservices, and clean design principles. As a Senior II-level engineer, you'll work at the intersection of performance, scalability, and reliability, designing services that power real products used by real people. You'll be hands-on with modern Python frameworks and contribute significantly to architectural decisions that shape our platform's future. If building efficient, production-grade Python systems excites you, this is your next big move.

Requirements
- 5 to 7 years of hands-on experience with Python-based development
- Expert-level proficiency in Python
- Strong background in backend development, microservices, and API design
- Experience with at least one web framework: Django, Flask, or equivalent
- Hands-on experience with ORM libraries and relational data modeling
- Solid understanding of asynchronous programming, event-driven architecture, and queue systems (e.g., RabbitMQ, Kafka)
- Familiarity with HTML5, CSS3, and JavaScript for front-end integration
- Understanding of security compliance, data validation, and input sanitization
- Ability to write modular, reusable, and testable code
- Strong knowledge of version control tools (e.g., Git) and best practices
- Comfortable working in a collaborative, agile development environment

Responsibilities
- Design, develop, and maintain highly scalable, high-performance Python microservices
- Collaborate with cross-functional teams to implement microservice architecture best practices
- Integrate queueing systems for asynchronous communication
- Work with ORMs to manage multiple databases and data sources
- Design database schemas aligned with business processes and application needs
- Ensure strong unit testing, debugging, and maintainability of services
- Apply threading, multiprocessing, and server-side templating knowledge (e.g., Jinja2, Mako)
- Build systems compliant with security, performance, and accessibility standards
- Handle user authentication and authorization flows across multiple systems
- Optimize applications for different platforms (desktop, mobile)
- Collaborate closely with front-end teams and designers to implement user requirements
- Follow version control best practices and participate in code reviews
- Take part in deploying, configuring, and supporting production applications

Benefits
We don't just hire employees; we invest in people. At GKM IT, we've designed a benefits experience that's thoughtful, supportive, and actually useful. Here's what you can look forward to:
- Top-Tier Work Setup: You'll be equipped with a premium MacBook and all the accessories you need. Great tools make great work.
- Flexible Schedules & Remote Support: Life isn't 9-to-5. Enjoy flexible working hours, emergency work-from-home days, and utility support that makes remote life easier.
- Quarterly Performance Bonuses: We don't believe in waiting a whole year to celebrate your success. Perform well and you'll see it in your paycheck, quarterly.
- Learning is Funded Here: Conferences, courses, certifications: if it helps you grow, we've got your back. We even offer a dedicated educational allowance.
- Family-First Culture: Your loved ones matter to us too. From birthday and anniversary vouchers (Amazon, BookMyShow) to maternity and paternity leave, we're here for life outside work.
- Celebrations & Gifting, The GKM IT Way: Onboarding hampers, festive goodies (Diwali, Holi, New Year), and company anniversary surprises; it's always celebration season here.
- Team Bonding Moments: We love food, and we love people. Quarterly lunches, dinners, and fun company retreats help us stay connected beyond the screen.
- Healthcare That Has You Covered: Enjoy comprehensive health insurance for you and your family, because peace of mind shouldn't be optional.
- Extra Rewards for Extra Effort: Weekend work doesn't go unnoticed, and great referrals don't go unrewarded. From incentives to bonuses, you'll feel appreciated.
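The asynchronous, event-driven pattern this role calls for can be sketched with nothing but the standard library. A hypothetical order-event pipeline using `asyncio.Queue`; in production a broker such as RabbitMQ or Kafka would replace the in-process queue, and the handler names here are invented for illustration:

```python
import asyncio

async def producer(queue: asyncio.Queue, events: list[str]) -> None:
    # Publish each event to the queue, yielding control between sends.
    for event in events:
        await queue.put(event)
    await queue.put(None)  # sentinel: no more events

async def consumer(queue: asyncio.Queue, processed: list[str]) -> None:
    # Drain the queue until the sentinel arrives.
    while True:
        event = await queue.get()
        if event is None:
            break
        processed.append(event.upper())  # stand-in for real event handling

async def main() -> list[str]:
    queue: asyncio.Queue = asyncio.Queue(maxsize=10)
    processed: list[str] = []
    await asyncio.gather(
        producer(queue, ["order.created", "order.paid"]),
        consumer(queue, processed),
    )
    return processed

if __name__ == "__main__":
    print(asyncio.run(main()))
```

The bounded `maxsize` gives back-pressure for free: a slow consumer makes `queue.put` suspend the producer instead of letting memory grow.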
Posted 1 day ago
5.0 - 7.0 years
0 Lacs
Udaipur, Rajasthan, India
Remote
At GKM IT, we're passionate about building seamless digital experiences powered by robust and intelligent data systems. We're on the lookout for a Data Engineer - Senior II to architect and maintain high-performance data platforms that fuel decision-making and innovation. If you enjoy designing scalable pipelines, optimising data systems, and leading with technical excellence, you'll thrive in our fast-paced, outcome-driven culture. You'll take ownership of building reliable, secure, and scalable data infrastructure, from streaming pipelines to data lakes. Working closely with engineers, analysts, and business teams, you'll ensure that data is not just available, but meaningful and impactful across the organization.

Requirements
- 5 to 7 years of experience in data engineering
- Architect and maintain scalable, secure, and reliable data platforms and pipelines
- Design and implement data lake/data warehouse solutions such as Redshift, BigQuery, Snowflake, or Delta Lake
- Build real-time and batch data pipelines using tools like Apache Airflow, Kafka, Spark, and DBT
- Ensure data governance, lineage, quality, and observability
- Collaborate with stakeholders to define data strategies, architecture, and KPIs
- Lead code reviews and enforce best practices
- Mentor junior and mid-level engineers
- Optimize query performance, data storage, and infrastructure
- Integrate CI/CD workflows for data deployment and automated testing
- Evaluate and implement new tools and technologies as required
- Expert-level proficiency in Python and SQL
- Deep knowledge of distributed systems and data processing frameworks
- Proficiency in cloud platforms (AWS, GCP, or Azure), containerization, and CI/CD processes
- Experience with streaming platforms like Kafka or Kinesis and with orchestration tools
- Strong skills in Airflow, DBT, and data warehouse performance tuning
- Strong leadership, communication, and mentoring skills

Benefits
We don't just hire employees; we invest in people. At GKM IT, we've designed a benefits experience that's thoughtful, supportive, and actually useful. Here's what you can look forward to:
- Top-Tier Work Setup: You'll be equipped with a premium MacBook and all the accessories you need. Great tools make great work.
- Flexible Schedules & Remote Support: Life isn't 9-to-5. Enjoy flexible working hours, emergency work-from-home days, and utility support that makes remote life easier.
- Quarterly Performance Bonuses: We don't believe in waiting a whole year to celebrate your success. Perform well and you'll see it in your paycheck, quarterly.
- Learning is Funded Here: Conferences, courses, certifications: if it helps you grow, we've got your back. We even offer a dedicated educational allowance.
- Family-First Culture: Your loved ones matter to us too. From birthday and anniversary vouchers (Amazon, BookMyShow) to maternity and paternity leave, we're here for life outside work.
- Celebrations & Gifting, The GKM IT Way: Onboarding hampers, festive goodies (Diwali, Holi, New Year), and company anniversary surprises; it's always celebration season here.
- Team Bonding Moments: We love food, and we love people. Quarterly lunches, dinners, and fun company retreats help us stay connected beyond the screen.
- Healthcare That Has You Covered: Enjoy comprehensive health insurance for you and your family, because peace of mind shouldn't be optional.
- Extra Rewards for Extra Effort: Weekend work doesn't go unnoticed, and great referrals don't go unrewarded. From incentives to bonuses, you'll feel appreciated.
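The orchestration tools named in this posting (Airflow, DBT) share one core idea: pipeline tasks form a dependency graph that is executed in topological order. A minimal, dependency-free illustration using the standard library's `graphlib`; the task names are hypothetical:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of upstream tasks
# it depends on, the same model an Airflow DAG expresses.
PIPELINE = {
    "extract": set(),
    "validate": {"extract"},
    "transform": {"validate"},
    "load_warehouse": {"transform"},
    "refresh_dashboard": {"load_warehouse"},
}

def run_order(graph: dict[str, set[str]]) -> list[str]:
    # Resolve a valid execution order; orchestrators do the
    # equivalent per pipeline run before scheduling tasks.
    return list(TopologicalSorter(graph).static_order())

if __name__ == "__main__":
    print(run_order(PIPELINE))
```

A real orchestrator adds retries, scheduling, and parallel execution of independent branches on top of exactly this ordering step.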
Posted 1 day ago
4.0 - 6.0 years
0 Lacs
Thiruvananthapuram, Kerala, India
On-site
The world's top banks use Zafin's integrated platform to drive transformative customer value. Powered by an innovative AI-powered architecture, Zafin's platform seamlessly unifies data from across the enterprise to accelerate product and pricing innovation, automate deal management and billing, and create personalized customer offerings that drive expansion and loyalty. Zafin empowers banks to drive sustainable growth, strengthen their market position, and define the future of banking centered around customer value.

Zafin is privately owned and operates out of multiple global locations including North America, Europe, and Australia. Zafin is backed by significant financial partners committed to accelerating the company's growth, fueling innovation, and ensuring Zafin's enduring value as a leading provider of cloud-based solutions to the financial services sector. Zafin is proud to be recognized as a top employer. In Canada, the UK, and India, we are certified as a "Great Place to Work". The Great Place to Work program recognizes employers who invest in and value their people and organizational culture. The company's culture is driven by strategy and focused on execution. We make and keep our commitments.

What is the opportunity?
Reporting to the Senior Vice President, Technology and Integration Services, the Integration Consultant will work on multiple client-facing projects to enable SaaS onboarding.
You will act as the technical go-to person for SaaS onboarding and a thought leader on Zafin integration paradigms and best practices across multiple cloud platforms. You will have ample opportunities to field your best communication, technical, and organizational skills to take a SaaS project from initiation to go-live. You will represent client interests internally to enhance process, product, documentation, and delivery experience. You will interact with senior technical leadership on the client side to present and align on solution approaches, and with multiple internal stakeholders to achieve alignment across project goals and delivery targets.

Location: India.

What will you do?
- Interact and collaborate with customers and partners to define the integration landscape.
- Define the logical sequence of integration activities for a SaaS onboarding project.
- Coordinate with the product development team to implement recommended integration strategies.
- Improve overall project delivery experience and go-live time by improving process and documentation.
- Support cloud infrastructure and system components required for integration.
- Lead the identification, isolation, resolution, and communication of issues within a client environment.

What do I need to succeed?
Must have:
- Worked on at least one end-to-end SaaS implementation project.
- 4 to 6 years of application and data integration experience.
- Experience with clustering and high-availability configurations.
- Agile experience.
- Designed an end-to-end scalable microservices-based integration solution.
- Broad exposure to the different technology stacks involved in a SaaS delivery model.
- Broad and in-depth knowledge of:
  - microservices design patterns, service orchestration, and inter-service communication (REST, gRPC, message queues)
  - data integration concepts and tools
  - network protocol stacks and related integration paradigms
  - security postures in integration technology stacks
  - API design and API-based integration
  - Azure, AWS, and GCP public cloud platforms, the services they provide, and their integration approaches
  - integration frameworks used by SaaS products
- Strong knowledge of and experience with the Kafka Connect framework, including working with multiple connector types: HTTP, RESTful APIs, JMS.
- Skilled technical documenter.
- Solution designer at heart, with experience using modeling tools to create effective architecture views.
- Strong organizational, analytical, critical thinking, and debugging skills.
- Excellent communication skills, with the ability to break down complex technical and functional requirements and articulate them to the different stakeholders involved in a project.
- Self-starter willing to get involved in all aspects of solution delivery, including implementation and process improvement.
- Big-picture minded: one who sees the end-to-end solution of a project.

Nice to have:
- Domain knowledge of banking and financial institutions and/or large enterprise IT environments.
- Strong delivery experience with geographically distributed delivery and client teams.
- Strong knowledge of and hands-on experience with setting up and configuring Kafka brokers.

What's in it for you?
If you work with us, we expect you'll show the spirit, drive, and intellect that make you great. We offer a challenging, team-oriented work environment, competitive remuneration and benefits, and excellent opportunities for professional and personal growth. If you thrive in a high-energy, entrepreneurial environment, we invite you to share your passion, ideas, and excitement at Zafin. Joining our team means being part of a culture that values diversity, teamwork, and high-quality work. We offer competitive salaries, annual bonus potential, generous paid time off, paid volunteering days, wellness benefits, and robust opportunities for professional growth and career advancement. Want to learn more about what you can look forward to during your career with us? Visit our careers site and our openings: zafin.com/careers

Zafin welcomes and encourages applications from people with disabilities. Accommodations are available on request for candidates taking part in all aspects of the selection process.
Zafin is committed to protecting both the privacy and security of the personal information collected from all applicants throughout the recruitment process. The methods by which Zafin collects, uses, stores, handles, retains, or discloses applicant information can be accessed by reviewing Zafin's privacy policy at https://zafin.com/privacy/. By submitting a job application, you confirm that you agree to the processing of your personal data by Zafin as described in the candidate privacy notice.
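The Kafka Connect framework named in the posting's must-have list registers connectors by POSTing a JSON payload to the Connect REST API. A sketch of what such a registration payload might look like for a JMS-style source: the `connector.class` and the `jms.*`/`kafka.topic` property names are hypothetical placeholders (real property names vary by connector plugin), while `tasks.max` and the converter classes are standard Connect settings:

```python
import json

def jms_source_config(name: str, jms_url: str, topic: str) -> dict:
    # Shape of a Kafka Connect connector registration payload.
    return {
        "name": name,
        "config": {
            # Hypothetical class name; depends on the plugin installed
            # on the Connect cluster.
            "connector.class": "com.example.jms.JmsSourceConnector",
            "tasks.max": "1",
            "jms.url": jms_url,          # hypothetical plugin property
            "kafka.topic": topic,        # hypothetical plugin property
            "key.converter": "org.apache.kafka.connect.storage.StringConverter",
            "value.converter": "org.apache.kafka.connect.json.JsonConverter",
        },
    }

if __name__ == "__main__":
    payload = jms_source_config("payments-jms", "tcp://mq:61616", "payments.events")
    # In practice this would be sent as:
    #   POST http://<connect-host>:8083/connectors
    print(json.dumps(payload, indent=2))
```

Keeping connector configs as code like this (rather than hand-edited JSON) makes them easy to template per environment and to validate in CI.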
Posted 1 day ago
3.0 - 6.0 years
0 Lacs
Mohali district, India
On-site
About Antier Solutions
Antier Solutions is a leading technology solutions provider offering high-quality software development, blockchain development, and consulting services to businesses globally. With a strong emphasis on innovation and problem-solving, we help our clients achieve their digital transformation goals by creating cutting-edge solutions across industries.

Job Overview
Antier Solutions is looking for an experienced and dynamic Python AI Developer to join our development team. The ideal candidate will be responsible for developing, testing, and maintaining Python-based applications and solutions. You will work closely with other developers, designers, and stakeholders to create efficient, scalable, and high-performing systems.

Key Responsibilities
- Design, develop, and maintain Python applications and services.
- Write reusable, testable, and efficient code.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Optimize applications for maximum speed and scalability.
- Implement automated testing (unit tests, integration tests) to ensure the reliability of code.
- Troubleshoot, debug, and upgrade existing systems.
- Work with databases (SQL and NoSQL) and integrate APIs.
- Stay up to date with the latest industry trends and best practices.
- Collaborate in agile development processes and participate in sprint planning, stand-ups, and code reviews.
- Ensure compliance with security best practices and data protection laws.
- Mentor junior developers and provide technical guidance where necessary.

Required Skills and Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3-6 years of proven experience in Python development.
- Strong knowledge of Python frameworks such as Django, Flask, or FastAPI.
- Hands-on experience with RESTful API development and integration.
- Proficiency in working with relational and non-relational databases (e.g., MySQL, PostgreSQL, MongoDB).
- Solid understanding of data structures, algorithms, and software design principles.
- Knowledge of version control systems (Git, SVN).
- Familiarity with front-end technologies like HTML, CSS, and JavaScript is a plus.
- Experience with cloud services (AWS, Azure, GCP) is a plus.
- Understanding of containerization technologies (Docker, Kubernetes) is a plus.
- Strong problem-solving and analytical skills.
- Ability to work both independently and as part of a team in a fast-paced environment.
- Excellent communication and collaboration skills.

Preferred Skills
- Experience with microservices architecture.
- Knowledge of Agile methodologies and version control systems like Git.
- Familiarity with CI/CD pipelines and DevOps practices.
- Experience with message brokers like RabbitMQ or Kafka.
- Exposure to machine learning, data science, or artificial intelligence is a plus.

Interested candidates can also share their resume at shikha.rana@antiersolutions.com
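Two of the requirements above, input validation and automated unit testing, pair naturally: a validator that returns errors as data instead of raising is trivial to unit-test. A small, framework-free sketch; the field rules are invented for illustration:

```python
def validate_user_payload(payload: dict) -> list[str]:
    # Return a list of validation errors; an empty list means valid.
    # Returning data (not raising) lets an API layer map errors
    # straight to a 400 response body.
    errors = []
    email = payload.get("email")
    if not isinstance(email, str) or "@" not in email:
        errors.append("email: must be a valid address")
    age = payload.get("age")
    if not isinstance(age, int) or not 0 < age < 150:
        errors.append("age: must be an integer between 1 and 149")
    return errors

if __name__ == "__main__":
    print(validate_user_payload({"email": "dev@example.com", "age": 30}))
    print(validate_user_payload({"email": "not-an-email", "age": 200}))
```

In a Django, Flask, or FastAPI service the same role is usually played by serializers or Pydantic models, but the testable-function shape is the same.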
Posted 1 day ago
Kafka, a popular distributed streaming platform, has gained significant traction in the tech industry in recent years. Job opportunities for Kafka professionals in India have been on the rise, with many companies looking to leverage Kafka for real-time data processing and analytics. If you are a job seeker interested in Kafka roles, here is a comprehensive guide to help you navigate the job market in India.
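Part of what makes Kafka suit real-time processing is how it spreads records across a topic's partitions: the default partitioner hashes the record key, so all records with the same key land on the same partition and stay ordered. A simplified sketch of that assignment (Kafka's default partitioner actually uses murmur2; `crc32` stands in here to keep the example dependency-free and deterministic):

```python
import zlib

def assign_partition(key: str, num_partitions: int) -> int:
    # Hash the record key and map it onto one of the topic's
    # partitions, mirroring Kafka's key-based default partitioning.
    return zlib.crc32(key.encode("utf-8")) % num_partitions

if __name__ == "__main__":
    for key in ("user-1", "user-2", "user-1"):
        print(key, "->", assign_partition(key, 6))
```

The practical consequence: choosing a good key (e.g., a user or account ID) is what gives you per-entity ordering and balanced load across consumers.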
India's major technology hubs, with their thriving tech industries, have a particularly high demand for Kafka professionals.
The average salary range for Kafka professionals in India varies based on experience levels. Entry-level positions may start at around INR 6-8 lakhs per annum, while experienced professionals can earn between INR 12-20 lakhs per annum.
Career progression in Kafka typically follows a path from Junior Developer to Senior Developer, and then to a Tech Lead role. As you gain more experience and expertise in Kafka, you may also explore roles such as Kafka Architect or Kafka Consultant.
In addition to Kafka expertise, employers often look for professionals with skills in:
- Apache Spark
- Apache Flink
- Hadoop
- Java/Scala programming
- Data engineering and data architecture
As you explore Kafka job opportunities in India, remember to showcase your expertise in Kafka and related skills during interviews. Prepare thoroughly, demonstrate your knowledge confidently, and stay updated with the latest trends in Kafka to excel in your career as a Kafka professional. Good luck with your job search!