
71 Zookeeper Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 years

15 Lacs

India

On-site

Key Responsibilities:
- Architect, design, and optimize enterprise-grade NiFi data flows for large-scale ingestion, transformation, and routing.
- Manage Kafka clusters at scale (multi-node, multi-datacenter setups), ensuring high availability, fault tolerance, and maximum throughput.
- Create custom NiFi processors and develop advanced flow templates and best practices.
- Handle advanced Kafka configurations: partitioning, replication, producer tuning, consumer optimization, rebalancing, etc.
- Implement stream processing using Kafka Streams and manage Kafka Connect integrations with external systems (databases, APIs, cloud storage).
- Design secure pipelines with end-to-end encryption, authentication (SSL/SASL), and RBAC for both NiFi and Kafka.
- Proactively monitor and troubleshoot performance bottlenecks in real-time streaming environments.
- Collaborate with infrastructure teams on scaling, backup, and disaster recovery planning for NiFi/Kafka.
- Mentor junior engineers and enforce best practices for data flow and streaming architectures.

Required Skills and Qualifications:
- 5+ years of hands-on production experience with Apache NiFi and Apache Kafka.
- Deep understanding of NiFi architecture (flow file repository, provenance, state management, backpressure handling).
- Mastery of Kafka internals: brokers, producers/consumers, Zookeeper (or KRaft mode), offsets, ISR, topic configurations.
- Strong experience with Kafka Connect, Kafka Streams, Schema Registry, and data serialization formats (Avro, Protobuf, JSON).
- Expertise in tuning NiFi and Kafka for ultra-low latency and high throughput.
- Strong scripting and automation skills (Shell, Python, Groovy, etc.).
- Experience with monitoring tools: Prometheus, Grafana, Confluent Control Center, NiFi Registry, NiFi monitoring dashboards.
- Solid knowledge of security best practices in data streaming (encryption, access control, secret management).
- Hands-on experience deploying on cloud platforms (AWS MSK, Azure Event Hubs, GCP Pub/Sub with Kafka connectors).
- Bachelor's or Master's degree in Computer Science, Data Engineering, or an equivalent field.

Preferred (Bonus) Skills:
- Experience with containerization and orchestration (Docker, Kubernetes, Helm).
- Knowledge of stream processing frameworks like Apache Flink or Spark Streaming.
- Contributions to open-source NiFi/Kafka projects (a huge plus).

Soft Skills:
- Analytical thinker with exceptional troubleshooting skills.
- Ability to architect solutions under tight deadlines.
- Leadership qualities for guiding and mentoring engineering teams.
- Excellent communication and documentation skills.

Please send your resume to hr@rrmgt.in or call 9081819473.

Job Type: Full-time
Pay: From ₹1,500,000.00 per year
Work Location: In person
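The posting above calls out advanced Kafka configuration such as partitioning. The core idea of key-based partitioning can be sketched in a few lines; note this is an illustration of the concept only, not Kafka's actual partitioner (which uses murmur2 hashing), and the function name is invented for this sketch.

```python
# Minimal sketch of key-based partition assignment, the idea behind
# Kafka's default partitioner. Kafka actually uses murmur2 hashing;
# a stable stdlib hash (md5) stands in here for illustration only.
import hashlib

def assign_partition(key: bytes, num_partitions: int) -> int:
    """Map a record key to a partition deterministically."""
    digest = hashlib.md5(key).digest()
    value = int.from_bytes(digest[:4], "big")
    return value % num_partitions

# Records with the same key always land in the same partition,
# which is what preserves per-key ordering.
p1 = assign_partition(b"order-42", 12)
p2 = assign_partition(b"order-42", 12)
assert p1 == p2
```

Because assignment depends on the partition count, increasing partitions on an existing topic changes where keys land, which is one reason partition counts are chosen carefully up front.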

Posted 1 day ago


0 years

0 Lacs

Mumbai Metropolitan Region

Remote


Are you ready to make your mark with a true industry disruptor? ZineOne, a subsidiary of Session AI, the pioneer of in-session marketing, is looking to add talented team members to help us grow into the premier revenue tool for e-commerce. We work with some of the leading brands nationwide and we innovate how brands connect with and convert customers. Job Description This position offers a hands-on, technical opportunity as a vital member of the Site Reliability Engineering Group. Our SRE team is dedicated to ensuring that our Cloud platform operates seamlessly, efficiently, and reliably at scale. The ideal candidate will bring over five years of experience managing cloud-based Big Data solutions, with a strong commitment to resolving operational challenges through automation and sophisticated software tools. Candidates must uphold a high standard of excellence and possess robust communication skills, both written and verbal. A strong customer focus and deep technical expertise in areas such as Linux, automation, application performance, databases, load balancers, networks, and storage systems are essential. 
Key Responsibilities: As a Session AI SRE, you will:
- Design and implement solutions that enhance the availability, performance, and stability of our systems, services, and products
- Develop, automate, and maintain infrastructure as code for provisioning environments in AWS, Azure, and GCP
- Deploy modern automated solutions that enable automatic scaling of the core platform and features in the cloud
- Apply cybersecurity best practices to safeguard our production infrastructure
- Collaborate on DevOps automation, continuous integration, test automation, and continuous delivery for the Session AI platform and its new features
- Manage data engineering tasks to ensure accurate and efficient data integration into our platform and outbound systems
- Utilize expertise in DevOps best practices, shell scripting, Python, Java, and other programming languages, while continually exploring new technologies for automation solutions
- Design and implement monitoring tools for service health, including fault detection, alerting, and recovery systems
- Oversee business continuity and disaster recovery operations
- Create and maintain operational documentation, focusing on reducing operational costs and enhancing procedures
- Demonstrate a continuous learning attitude with a commitment to exploring emerging technologies

Preferred Skills:
- Experience with cloud platforms like AWS, Azure, and GCP, including their management consoles and CLI
- Proficiency in building and maintaining infrastructure on:
  - AWS, using services such as EC2, S3, ELB, VPC, CloudFront, Glue, Athena, etc.
  - Azure, using services such as Azure VMs, Blob Storage, Azure Functions, Virtual Networks, Azure Active Directory, Azure SQL Database, etc.
  - GCP, using services such as Compute Engine, Cloud Storage, Cloud Functions, VPC, Cloud IAM, BigQuery, etc.
- Expertise in Linux system administration and performance tuning
- Strong programming skills in Python, Bash, and NodeJS
- In-depth knowledge of container technologies like Docker and Kubernetes
- Experience with real-time, big data platforms, including architectures like HDFS/HBase, Zookeeper, and Kafka
- Familiarity with central logging systems such as ELK (Elasticsearch, Logstash, Kibana)
- Competence in implementing monitoring solutions using tools like Grafana, Telegraf, and InfluxDB

Benefits
- Competitive salary package and stock options
- Opportunity for continuous learning
- Fully sponsored EAP services
- Excellent work culture
- Opportunity to be an integral part of our growth story and grow with our company
- Health insurance for employees and dependents
- Flexible work hours
- Remote-friendly company
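The monitoring responsibilities above (fault detection and alerting with tools like Grafana) rest on a simple pattern worth making concrete: fire an alert only when a breach is sustained, not on a single spike. The sketch below is a hypothetical illustration of that rule; the function name and thresholds are invented and are not any product's API.

```python
# Hypothetical sketch of a sustained-threshold alert rule, the kind
# of evaluation a monitoring stack like Grafana performs. Names and
# numbers here are illustrative only.
def should_alert(samples: list, threshold: float, min_breaches: int = 3) -> bool:
    """True when at least `min_breaches` samples exceed the threshold."""
    breaches = sum(1 for s in samples if s > threshold)
    return breaches >= min_breaches

# One spike does not page; a sustained breach does.
quiet = should_alert([0.2, 0.9, 0.3, 0.2], threshold=0.8)     # False
noisy = should_alert([0.9, 0.95, 0.91, 0.85], threshold=0.8)  # True
```

Requiring multiple breaches in a window is the standard defense against alert fatigue in on-call rotations.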

Posted 1 day ago


7.0 years

0 Lacs

Noida

On-site

Our Company
Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

Do you have mobile applications installed on your devices? If so, chances are you've encountered our products. Ready to redefine the future of mobile experiences? The Adobe Experience Cloud Mobile team is integral to the Adobe Journey Optimizer and Adobe Experience Platform, tailoring personalized, multi-channel customer journeys and campaigns with unified real-time customer data. Empowering businesses to deliver seamless, personalized experiences across channels is our focus. We're looking for a Software Engineer who is hardworking, eager to learn new technologies, and ready to contribute to building scalable, performant services for large enterprises. Your role involves designing, developing, testing, and maintaining high-performance systems in multi-cloud/region environments. Join us in shaping the digital experiences of tomorrow and making a significant impact in an ambitious and rewarding environment.

What you'll do
- Participate in all aspects of service development activities, including design, prioritisation, coding, code review, testing, bug fixing, and deployment.
- Implement and maintain robust monitoring, alerting, and incident response to ensure the highest level of uptime and Quality of Service to customers through operational excellence.
- Participate in incident response efforts during significant impact events, and contribute to after-action investigations, reviews, and any indicated improvement actions.
- Identify and address performance bottlenecks; look for ways to continually improve the product and process.
- Build and maintain detailed documentation for software architecture, design, and implementation.
- Develop and evolve our test automation infrastructure to increase scale and velocity.
- Ensure quality around services and the end-to-end experience of our products.
- Collaborate with multi-functional professionals (UI/SDK developers, product managers, Design, etc.) to deliver solutions.
- Participate in story mapping, daily stand-ups, retrospectives, and sprint planning/demos on a two-week cadence.
- Work independently on delivering sophisticated functionality.
- Rapidly prototype ideas and concepts, and research recent trends and technologies.
- Communicate clearly with the team and management to define and achieve goals.
- Mentor and grow junior team members.

What you will need to succeed:
- B.S. in Computer Science or an equivalent engineering degree
- 7+ years of experience crafting and developing web or software applications
- Strong communication and teamwork skills, building positive relationships with internal and external customers
- Dedication to teamwork, self-organization, and continuous improvement
- Proven experience in backend development, with expertise in languages such as Java, Node.js, or Python
- Experience running cloud infrastructure, including hands-on experience with AWS or Azure, Kubernetes, GitOps, Terraform, Docker, and CI/CD
- Experience setting up SLAs/SLOs/SLIs for key services and establishing the monitoring around them
- Experience writing functional/integration/performance tests and test frameworks
- Experience with both SQL and NoSQL
- Experience with Kafka and Zookeeper is a plus
- Experience with Mobile Application development is a plus

Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
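The SLA/SLO/SLI responsibility above boils down to simple arithmetic: an availability SLO implies an "error budget" of allowed downtime per window. A minimal sketch of that calculation, with example numbers only:

```python
# Illustrative error-budget arithmetic behind an availability SLO.
# The function name and numbers are examples for this sketch.
def error_budget_minutes(slo: float, window_minutes: int) -> float:
    """Allowed downtime for a given availability SLO over a window."""
    return window_minutes * (1.0 - slo)

# A 99.9% SLO over a 30-day window allows about 43.2 minutes of downtime.
budget = error_budget_minutes(0.999, 30 * 24 * 60)
```

Teams typically alert on the rate at which this budget is being consumed (burn rate) rather than on raw availability.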

Posted 2 days ago


0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Position Overview: The ideal candidate will have a solid foundation in Java programming, with additional exposure to Scala and Python being a plus. This role requires an understanding of data modeling concepts (such as with UML) and experience with Terraform-based infrastructure deployment (such as in AWS). Familiarity with messaging systems such as Kafka and IBM MQ, stream processing technologies such as Flink, Flume, Spark, or Ray, and knowledge of the Spring and Spring Boot frameworks is important. Experience with caching systems, in-memory databases like RocksDB or ElastiCache, and distributed caching is beneficial. The role also involves working with distributed system design, as well as understanding synchronous and asynchronous messaging principles and design.

Key Responsibilities:
- Develop and maintain Java applications, with a preference for Java versions 11, 17, and 21.
- Utilize Scala and Python for specific project requirements as needed.
- Design and implement data models using UML concepts.
- Deploy and manage infrastructure using Terraform in AWS environments.
- Work with messaging systems, including Kafka and IBM MQ, to ensure efficient data communication.
- Implement solutions using the Spring and Spring Boot frameworks.
- Manage caching systems and in-memory databases, ensuring optimal performance.
- Contribute to the design and development of distributed systems, leveraging technologies like Zookeeper and Kafka.
- Apply synchronous and asynchronous messaging principles in system design.
- Utilize serialization formats such as Protobuf, Avro, and FlatBuffers as applicable.
- Work with data formats like Parquet and Iceberg, and understand data warehouse and lakehouse concepts.

Candidate Profile:
- Strong fundamentals in Java programming, with exposure to Scala and Python as a bonus.
- Fast learner with the ability to adapt to new technologies and methodologies.
- Creative thinker, open to innovative solutions beyond conventional approaches.
- Proactive and independent, capable of taking initiative and driving projects forward.
- Strong communication skills, able to collaborate effectively with cross-functional teams.

Preferred Qualifications:
- Experience with Java versions 11, 17, and 21.
- Familiarity with the Protobuf, Avro, and FlatBuffers serialization formats.
- Understanding of the Parquet and Iceberg table formats.
- Knowledge of data warehouse and lakehouse concepts.
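The synchronous/asynchronous messaging principles mentioned above can be made concrete with a toy producer/consumer: the producer enqueues events without waiting for a reply (fire-and-forget), and a sentinel marks the end of the stream. This sketch uses Python's asyncio.Queue as a stand-in for a broker like Kafka; all names are illustrative.

```python
# Toy asynchronous producer/consumer. asyncio.Queue stands in for a
# message broker; a None sentinel signals end of stream.
import asyncio

async def producer(queue: asyncio.Queue) -> None:
    for i in range(3):
        await queue.put(f"event-{i}")   # async send: no reply awaited
    await queue.put(None)               # sentinel: stream complete

async def consumer(queue: asyncio.Queue) -> list:
    seen = []
    while (msg := await queue.get()) is not None:
        seen.append(msg)
    return seen

async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue()
    prod = asyncio.create_task(producer(queue))
    result = await consumer(queue)
    await prod
    return result

received = asyncio.run(main())  # ["event-0", "event-1", "event-2"]
```

The contrast with synchronous messaging is that a request/reply caller would block on each message until the consumer responded; here the producer finishes independently of consumption.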

Posted 2 days ago


0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


The Opportunity
We are seeking a senior software engineer to undertake a range of feature development tasks that continue the evolution of our DMP Streaming product. You will demonstrate the required potential and technical curiosity to work on software that utilizes a range of leading-edge technologies and integration frameworks. Given your depth of experience, we also want you to technically guide more junior members of the team, instilling both good engineering practices and inspiring them to grow.

What You'll Contribute
- Implement product changes, undertaking detailed design, programming, unit testing, and deployment as required by our SDLC process
- Investigate and resolve reported software defects across supported platforms
- Work in conjunction with product management to understand business requirements and convert them into effective software designs that will enhance the current product offering
- Produce component specifications and prototypes as necessary
- Provide realistic and achievable project estimates for the creation and development of solutions; this information will form part of a larger release delivery plan
- Develop and test software components of varying size and complexity
- Design and execute unit, link, and integration test plans, and document test results; create test data and environments as necessary to support the required level of validation
- Work closely with the quality assurance team and assist with integration testing, system testing, acceptance testing, and implementation
- Produce relevant system documentation
- Participate in peer review sessions to ensure ongoing quality of deliverables
- Validate other team members' software changes, test plans, and results
- Maintain and develop industry knowledge, skills, and competencies in software development

What We're Seeking
- A Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
- 10+ years of Java software development experience within an industry setting
- Ability to work in both Windows and UNIX/Linux operating systems
- Detailed understanding of software and testing methods
- Strong foundation and grasp of design models and database structures
- Proficiency in Kubernetes, Docker, and Kustomize
- Exposure to the following technologies: Apache Storm, MySQL or Oracle, Kafka, Cassandra, OpenSearch, and API (REST) development
- Familiarity with Eclipse, Subversion, and Maven
- Ability to lead and manage others independently on major feature changes
- Excellent communication skills, with the ability to articulate information clearly with architects and discuss strategy/requirements with team members and the product manager
- Quality-driven work ethic with meticulous attention to detail
- Ability to function effectively in a geographically diverse team
- Ability to work within a hybrid Agile methodology
- Understanding of the design and development approaches required to build a scalable infrastructure/platform for large-scale data ingestion, aggregation, integration, and advanced analytics
- Experience developing and deploying applications into AWS or a private cloud
- Exposure to any of the following: Hadoop, JMS, Zookeeper, Spring, JavaScript, Angular, UI development

Our Offer to You
- An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers, and Earn the Respect of Others.
- The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences.
- Highly competitive compensation, benefits, and rewards programs that encourage you to bring your best every day and be recognized for doing so.
- An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.

Posted 3 days ago


7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Our Company
Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

Do you have mobile applications installed on your devices? If so, chances are you've encountered our products. Ready to redefine the future of mobile experiences? The Adobe Experience Cloud Mobile team is integral to the Adobe Journey Optimizer and Adobe Experience Platform, tailoring personalized, multi-channel customer journeys and campaigns with unified real-time customer data. Empowering businesses to deliver seamless, personalized experiences across channels is our focus. We're looking for a Software Engineer who is hardworking, eager to learn new technologies, and ready to contribute to building scalable, performant services for large enterprises. Your role involves designing, developing, testing, and maintaining high-performance systems in multi-cloud/region environments. Join us in shaping the digital experiences of tomorrow and making a significant impact in an ambitious and rewarding environment.

What You'll Do
- Participate in all aspects of service development activities, including design, prioritisation, coding, code review, testing, bug fixing, and deployment.
- Implement and maintain robust monitoring, alerting, and incident response to ensure the highest level of uptime and Quality of Service to customers through operational excellence.
- Participate in incident response efforts during significant impact events, and contribute to after-action investigations, reviews, and any indicated improvement actions.
- Identify and address performance bottlenecks; look for ways to continually improve the product and process.
- Build and maintain detailed documentation for software architecture, design, and implementation.
- Develop and evolve our test automation infrastructure to increase scale and velocity.
- Ensure quality around services and the end-to-end experience of our products.
- Collaborate with multi-functional professionals (UI/SDK developers, product managers, Design, etc.) to deliver solutions.
- Participate in story mapping, daily stand-ups, retrospectives, and sprint planning/demos on a two-week cadence.
- Work independently on delivering sophisticated functionality.
- Rapidly prototype ideas and concepts, and research recent trends and technologies.
- Communicate clearly with the team and management to define and achieve goals.
- Mentor and grow junior team members.

What you will need to succeed:
- B.S. in Computer Science or an equivalent engineering degree
- 7+ years of experience crafting and developing web or software applications
- Strong communication and teamwork skills, building positive relationships with internal and external customers
- Dedication to teamwork, self-organization, and continuous improvement
- Proven experience in backend development, with expertise in languages such as Java, Node.js, or Python
- Experience running cloud infrastructure, including hands-on experience with AWS or Azure, Kubernetes, GitOps, Terraform, Docker, and CI/CD
- Experience setting up SLAs/SLOs/SLIs for key services and establishing the monitoring around them
- Experience writing functional/integration/performance tests and test frameworks
- Experience with both SQL and NoSQL
- Experience with Kafka and Zookeeper is a plus
- Experience with Mobile Application development is a plus

Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more about our vision here. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.

Posted 3 days ago


0 years

9 Lacs

Bengaluru

On-site

Associate - Production Support Engineer
Job ID: R0388741
Full/Part-Time: Full-time
Regular/Temporary: Regular
Listed: 2025-06-12
Location: Bangalore

Position Overview
Job Title: Associate - Production Support Engineer
Location: Bangalore, India

Role Description
You will be operating within Corporate Bank Production as an Associate, Production Support Engineer in the Corporate Banking subdivisions. You will be accountable for driving a culture of proactive continual improvement in the Production environment through application and user-request support, troubleshooting and resolving errors in the production environment, automation of manual work, monitoring improvements, and platform hygiene; you will also support the resolution of issues and conflicts and prepare reports and meetings. The candidate should have experience with all relevant tools used in the Service Operations environment, have specialist expertise in one or more technical domains, and ensure that all associated Service Operations stakeholders are provided with an optimum level of service in line with Service Level Agreements (SLAs) / Operating Level Agreements (OLAs). Ensure that all BAU support queries from the business are handled on priority and within the agreed SLA, and that all application stability issues are well taken care of. Support the resolution of incidents and problems within the team; assist with the resolution of complex incidents and ensure that the right problem-solving techniques and processes are applied. Embrace a Continuous Service Improvement approach to resolve IT failings, drive efficiencies, and remove repetition to streamline support activities, reduce risk, and improve system availability. Be responsible for your own engineering delivery and, using data and analytics, drive a reduction in technical debt across the production environment with development and infrastructure teams. Act as a Production Engineering role model to enhance the technical capability of the Production Support teams to create a future operating model embedded with engineering culture.

Deutsche Bank’s Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion, and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.

What we’ll offer you
As part of our flexible scheme, here are just some of the benefits that you’ll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for ages 35 and above

Your key responsibilities
- Lead by example to drive a culture of proactive continual improvement in the Production environment through automation of manual work, monitoring improvements, and platform hygiene.
- Carry out technical analysis of the Production platform to identify and remediate performance and resiliency issues.
- Engage in the Software Development Lifecycle (SDLC) to enhance Production Standards and controls.
- Update the RUN Book and KEDB as and when required.
- Participate in all BCP and component failure tests based on the run books.
- Understand the flow of data through the application infrastructure; it is critical to understand the dataflow to best provide operational support.
- Event monitoring and management via a 24x7 workbench that is both monitoring and regularly probing the service environment and acting on the instructions of the run book.
- Drive knowledge management across the supported applications and ensure full compliance.
- Work with team members to identify areas of focus where training may improve team performance and incident resolution.

Your skills and experience
- Recent experience of applying technical solutions to improve the stability of production environments.
- Working experience with some of the following technology skills:
  - Technologies/frameworks: Unix, Shell Scripting and/or Python; SQL stack; Oracle 12c/19c for PL/SQL, with familiarity with OEM tooling to review AWR reports and parameters; ITIL v3 certified (must); Control-M, CRON scheduling; MQ (DBUS, IBM); Java 8 / OpenJDK 11 (at least), for debugging; familiarity with the Spring Boot framework; data streaming with Kafka (experience with the Confluent flavor a plus) and ZooKeeper; Hadoop framework
  - Configuration management tooling: Ansible
  - Operating system/platform: RHEL 7.x (preferred), RHEL 6.x; OpenShift (as we move towards cloud computing, and Fabric is dependent on OpenShift)
  - CI/CD: Jenkins (preferred)
  - APM tooling: one of Splunk, AppDynamics, Geneos, New Relic
  - Other platforms: scheduling (Control-M is a plus; Autosys, etc.); search (Elasticsearch and/or Solr is a plus)
  - Methodology: microservices architecture; SDLC; Agile fundamentals
  - Network topology: TCP, LAN, VPN, GSLB, GTM, etc.
  - Familiarity with TDD and/or BDD
  - Distributed systems
  - Experience on cloud platforms such as Azure or GCP is a plus
  - Familiarity with containerization/Kubernetes
  - Tools: ServiceNow, Jira, Confluence, BitBucket and/or Git, IntelliJ, SQL*Plus; familiarity with simple Unix tooling (PuTTY, mPutty, Exceed); (PL/)SQL Developer
- Good understanding of an ITIL Service Management framework, including Incident, Problem, and Change processes.
- Ability to self-manage a book of work and ensure clear transparency on progress, with clear, timely communication of issues.
- Excellent communication skills, both written and verbal, with attention to detail.
- Ability to work in a Follow-the-Sun model, in virtual teams, and in a matrix structure.
- Service Operations experience within a global operations context.
- 6-9 years of experience in IT in large corporate environments, specifically in the area of controlled production environments or in Financial Services Technology in a client-facing function.
- Global Transaction Banking experience is a plus.
- Experience of end-to-end Level 2/3/4 management and a good overview of Production/Operations Management overall.
- Experience of run-book execution.
- Experience of supporting complex application and infrastructure domains.
- Good analytical, troubleshooting, and problem-solving skills.
- Working knowledge of incident tracking tools (e.g., Remedy, Heat).

How we’ll support you
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm

We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative, and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
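The Control-M/CRON scheduling skill above rests on cron's field syntax. As a toy illustration of how a single cron field is matched against a time value, here is a minimal sketch; real cron and Control-M syntax is much richer (ranges, names, calendars), and the function name is invented.

```python
# Toy matcher for one cron field: "*", a literal, a comma list,
# or a "*/n" step. Real cron fields also support ranges like "1-5";
# this sketch covers only the common cases for illustration.
def field_matches(field: str, value: int) -> bool:
    if field == "*":
        return True
    if field.startswith("*/"):
        return value % int(field[2:]) == 0
    return value in {int(part) for part in field.split(",")}

# "*/10" in the minute field matches minutes 0, 10, 20, ...
assert field_matches("*/10", 30)
assert not field_matches("1,15", 2)
```

A full cron evaluation simply requires all five fields (minute, hour, day of month, month, day of week) to match simultaneously.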

Posted 3 days ago


5.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Before applying for a job, select your preferred language from the options available at the top right of this page. Discover your next opportunity within an organization that ranks among the world's 500 largest companies. Explore innovative opportunities, discover our rewarding culture, and work with talented teams that push you to grow every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, the motivation, the autonomy, or the leadership to guide teams, there are roles suited to your aspirations and skills, today and tomorrow.

Job Description
- 5-7 years of Confluent Kafka platform experience
- Administration of the Confluent Kafka Platform on-premises and in the cloud
- Knowledge of Confluent Kafka operations
- Administration of topics, partitions, consumer groups, and kSQL queries to maintain optimal performance
- Knowledge of the Kafka ecosystem, including Kafka brokers, Zookeeper/KRaft, kSQL, Connectors, Schema Registry, Control Center, and platform interoperability
- Knowledge of Kafka Cluster Linking and replication
- Experience administering multi-regional Confluent clusters
- System performance: knowledge of performance tuning of messaging systems and clients to meet application requirements
- Operating systems: RedHat Linux

Contract type: permanent (CDI)

At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.
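Consumer-group administration, mentioned in the requirements above, hinges on how partitions are divided among group members. The sketch below illustrates the idea behind Kafka's "range" assignment strategy (contiguous chunks, with earlier members taking one extra partition when the division is uneven); it is a simplified model, not Confluent's implementation, and the function name is invented.

```python
# Sketch of range-style partition assignment for a consumer group:
# members are sorted, then given contiguous partition chunks, with
# the first `extra` members receiving one additional partition.
def range_assign(partitions: int, consumers: list) -> dict:
    members = sorted(consumers)
    per, extra = divmod(partitions, len(members))
    assignment, start = {}, 0
    for i, member in enumerate(members):
        count = per + (1 if i < extra else 0)
        assignment[member] = list(range(start, start + count))
        start += count
    return assignment

# 7 partitions over 3 consumers yields a 3/2/2 split.
plan = range_assign(7, ["c2", "c0", "c1"])
```

This is why uneven partition-to-consumer ratios leave some members with more load, one of the balancing concerns a Kafka administrator tunes for.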

Posted 3 days ago


0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Position Overview Job Title: Associate - Production Support Engineer Location: Bangalore, India Role Description You will be operating within Corporate Bank Production as an Associate Production Support Engineer in the Corporate Banking subdivisions. You will be accountable for driving a culture of proactive continual improvement in the Production environment through application and user request support, and by troubleshooting and resolving errors in the production environment, automating manual work, improving monitoring, and maintaining platform hygiene. You will support the resolution of issues and conflicts and prepare reports and meetings. The candidate should have experience with all relevant tools used in the Service Operations environment, have specialist expertise in one or more technical domains, and ensure that all associated Service Operations stakeholders are provided with an optimum level of service in line with Service Level Agreements (SLAs) / Operating Level Agreements (OLAs). Ensure all BAU support queries from the business are handled on priority and within the agreed SLA, and that all application stability issues are well taken care of. Support the resolution of incidents and problems within the team. Assist with the resolution of complex incidents. Ensure that the right problem-solving techniques and processes are applied. Embrace a Continuous Service Improvement approach to resolve IT failings, drive efficiencies and remove repetition to streamline support activities, reduce risk, and improve system availability. Be responsible for your own engineering delivery and, using data and analytics, drive a reduction in technical debt across the production environment with development and infrastructure teams. Act as a Production Engineering role model to enhance the technical capability of the Production Support teams to create a future operating model embedded with engineering culture.
Deutsche Bank’s Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support. What We’ll Offer You As part of our flexible scheme, here are just some of the benefits that you’ll enjoy Best in class leave policy Gender neutral parental leaves 100% reimbursement under childcare assistance benefit (gender neutral) Sponsorship for industry-relevant certifications and education Employee Assistance Program for you and your family members Comprehensive hospitalization insurance for you and your dependents Accident and term life insurance Complimentary health screening for those 35 yrs. and above Your Key Responsibilities Lead by example to drive a culture of proactive continual improvement into the Production environment through automation of manual work, monitoring improvements and platform hygiene. Carry out technical analysis of the Production platform to identify and remediate performance and resiliency issues. Engage in the Software Development Lifecycle (SDLC) to enhance Production Standards and controls.
Update the RUN Book and KEDB as and when required Participate in all BCP and component failure tests based on the run books Understand the flow of data through the application infrastructure. It is critical to understand the dataflow to best provide operational support Event monitoring and management via a 24x7 workbench that is both monitoring and regularly probing the service environment and acting on instructions of the run book. Drive knowledge management across the supported applications and ensure full compliance Work with team members to identify areas of focus where training may improve team performance and incident resolution. Your Skills And Experience Recent experience of applying technical solutions to improve the stability of production environments Working experience with some of the following technologies: Technologies/Frameworks: Unix, Shell Scripting and/or Python SQL Stack Oracle 12c/19c – for PL/SQL, familiarity with OEM tooling to review AWR reports and parameters ITIL v3 Certified (must) Control-M, CRON scheduling MQ – DBUS, IBM Java 8/OpenJDK 11 (at least) – for debugging Familiarity with Spring Boot framework Data Streaming – Kafka (experience with the Confluent flavor a plus) and ZooKeeper Hadoop framework Configuration Mgmt Tooling: Ansible Operating System/Platform: RHEL 7.x (preferred), RHEL 6.x OpenShift (as we move towards Cloud computing and the fact that Fabric is dependent on OpenShift) CI/CD: Jenkins (preferred) APM Tooling: either or one of Splunk, AppDynamics, Geneos, NewRelic Other platforms: Scheduling – Ctrl-M is a plus, Autosys, etc Search – Elasticsearch and/or Solr+ is a plus Methodology: Microservices architecture SDLC Agile Fundamental network topology – TCP, LAN, VPN, GSLB, GTM, etc Familiarity with TDD and/or BDD Distributed systems Experience on cloud platforms such as Azure or GCP is a plus Familiarity with containerization/Kubernetes Tools: ServiceNow Jira Confluence BitBucket and/or Git IntelliJ SQL Plus Familiarity
with simple Unix tooling – PuTTY, mPuTTY, Exceed, (PL/)SQL Developer Good understanding of ITIL Service Management framework such as Incident, Problem, and Change processes. Ability to self-manage a book of work and ensure clear transparency on progress with clear, timely communication of issues. Excellent communication skills, both written and verbal, with attention to detail. Ability to work in a Follow-the-Sun model, virtual teams and in a matrix structure Service Operations experience within a global operations context 6-9 years' experience in IT in large corporate environments, specifically in the area of controlled production environments or in Financial Services Technology in a client-facing function Global Transaction Banking experience is a plus. Experience of end-to-end Level 2, 3, 4 management and a good overview of Production/Operations Management overall Experience of run-book execution Experience of supporting complex application and infrastructure domains Good analytical, troubleshooting and problem-solving skills Working knowledge of incident tracking tools (e.g., Remedy, HEAT, etc.) How We’ll Support You Training and development to help you excel in your career Coaching and support from experts in your team A culture of continuous learning to aid progression A range of flexible benefits that you can tailor to suit your needs About Us And Our Teams Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
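A recurring task in a Kafka-facing production support role like this one is watching consumer lag. The arithmetic behind a lag dashboard is simple: lag per partition is the broker's log-end offset minus the group's committed offset. A minimal sketch (topic names and offsets are invented for illustration):

```python
def consumer_lag(end_offsets, committed):
    """Per-partition consumer lag: how far the group's committed
    offset trails the broker's log-end offset (floored at zero)."""
    return {tp: max(end_offsets[tp] - committed.get(tp, 0), 0)
            for tp in end_offsets}

# Keys are (topic, partition) pairs; numbers are illustrative.
end = {("orders", 0): 1500, ("orders", 1): 980}
done = {("orders", 0): 1480, ("orders", 1): 980}
lag = consumer_lag(end, done)
print(lag, "total:", sum(lag.values()))
```

In practice the two offset maps would come from the broker (e.g., via `kafka-consumer-groups.sh` output or an admin API) and the total would feed an alert threshold.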

Posted 5 days ago

Apply

4.0 years

0 Lacs

Dholera, Gujarat, India

On-site


About The Business - Tata Electronics Private Limited (TEPL) is a greenfield venture of the Tata Group with expertise in manufacturing precision components. Tata Electronics (a wholly owned subsidiary of Tata Sons Pvt. Ltd.) is building India’s first AI-enabled state-of-the-art Semiconductor Foundry. This facility will produce chips for applications such as power management IC, display drivers, microcontrollers (MCU) and high-performance computing logic, addressing the growing demand in markets such as automotive, computing and data storage, wireless communications and artificial intelligence. Tata Electronics is a subsidiary of the Tata group. The Tata Group operates in more than 100 countries across six continents, with the mission 'To improve the quality of life of the communities we serve globally, through long term stakeholder value creation based on leadership with Trust.’ Job Responsibilities - Architect and implement a scalable, offline Data Lake for structured, semi-structured, and unstructured data in an on-premises, air-gapped environment. Collaborate with Data Engineers, Factory IT, and Edge Device teams to enable seamless data ingestion and retrieval across the platform. Integrate with upstream systems like MES, SCADA, and process tools to capture high-frequency manufacturing data efficiently. Monitor and maintain system health, including compute resources, storage arrays, disk I/O, memory usage, and network throughput. Optimize Data Lake performance via partitioning, deduplication, compression (Parquet/ORC), and implementing effective indexing strategies. Select, integrate, and maintain tools like Apache Hadoop, Spark, Hive, HBase, and custom ETL pipelines suitable for offline deployment. Build custom ETL workflows for bulk and incremental data ingestion using Python, Spark, and shell scripting. Implement data governance policies covering access control, retention periods, and archival procedures with security and compliance in mind. 
Establish and test backup, failover, and disaster recovery protocols specifically designed for offline environments. Document architecture designs, optimization routines, job schedules, and standard operating procedures (SOPs) for platform maintenance. Conduct root cause analysis for hardware failures, system outages, or data integrity issues. Drive system scalability planning for multi-fab or multi-site future expansions. Essential Attributes (Tech Stacks) - Hands-on experience designing and maintaining offline or air-gapped Data Lake environments. Deep understanding of Hadoop ecosystem tools: HDFS, Hive, MapReduce, HBase, YARN, ZooKeeper and Spark. Expertise in custom ETL design and large-scale batch and stream data ingestion. Strong scripting and automation capabilities using Bash and Python. Familiarity with data compression formats (ORC, Parquet) and ingestion frameworks (e.g., Flume). Working knowledge of message queues such as Kafka or RabbitMQ, with a focus on integration logic. Proven experience in system performance tuning, storage efficiency, and resource optimization. Qualifications - BE/ME in Computer Science, Machine Learning, Electronics Engineering, Applied Mathematics, or Statistics. Desired Experience Level - 4 years of relevant experience post Bachelor's; 2 years of relevant experience post Master's. Experience with the semiconductor industry is a plus.
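The incremental-ingestion responsibility above usually reduces to tracking a high-water mark, so that each batch picks up only records newer than the previous run. A minimal, storage-agnostic sketch of that logic (the `ts` field name and the records are invented for illustration):

```python
def incremental_batch(records, watermark):
    """Select records strictly newer than the last watermark and
    return them together with the new watermark for the next run."""
    fresh = [r for r in records if r["ts"] > watermark]
    new_watermark = max((r["ts"] for r in fresh), default=watermark)
    return fresh, new_watermark

rows = [{"id": 1, "ts": 100}, {"id": 2, "ts": 250}, {"id": 3, "ts": 300}]
batch, wm = incremental_batch(rows, watermark=100)
print(len(batch), wm)  # picks up records 2 and 3; watermark advances to 300
```

Using a strict comparison and persisting `wm` between runs makes re-running a batch idempotent: replaying with the same watermark returns an empty batch rather than duplicates.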

Posted 5 days ago

Apply


8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Description Summary of This Role Collaborates with clients and other functional areas' SMEs in the design of IT Roadmap items to illustrate architectural complexities and interactions of information systems. Analyzes, refines and documents the business requirements of the client. Analyzes existing systems to detect critical deficiencies and recommend solutions for improvement. Plans and designs information systems and implements updates within the scope of established guidelines and objectives. Researches new technological advances to assess current practices for compliance with systems requirements. Recommends solutions to address current system needs, process improvements and controls. Makes recommendations for future information system needs. Provides technical architecture and support across applications and guidance to other functional areas to define software/hardware requirements and in planning and delivering infrastructure. Analyzes infrastructure and capacity planning. Employs a thorough knowledge of required procedures, methodologies and/or application standards, including Payment Card Industry (PCI) and security-related compliance, to write or modify software programs, including analysis, writing specifications and code, program installation and documentation for use with multiple application/user database systems. Maintains information systems by configuring software and hardware, tracking errors and data movement, and troubleshooting. Collaborate with engineers across the core team to create technical designs, develop, test and solve complex problems that drive the solution from initial concept to production. Contribute to our automated build, deploy and test processes for each solution. Work in an iterative manner that fits well with the development practices and pace within the team, with a focus on a fail-fast approach. Demo your work for colleagues and members of the business team.
Conduct research on new and interesting technologies that help to progress our products and platforms. Create mechanisms/architectures that enable rapid recovery, repair and cleanup of existing solutions, with a good understanding of fault tolerance and failure domains. Identify opportunities to deliver self-service capability for the most common infrastructure and application management tasks. Create automated tests that easily plug into our automated code pipeline. Provide deep and detailed levels of monitoring across all levels of the application. Attend sessions and seminars and be an evangelist for the latest technology. Lead and help mentor other engineers and technical analysts. Plan sprints within your project team to keep yourself and the team moving forward. What Are We Looking For in This Role? Minimum Qualifications MCA, B. Tech. or B.E. (four-year college degree) or equivalent. Typically a minimum of 8 years of professional experience in coding, designing, developing and analyzing data. Typically has advanced knowledge and use of one or more back-end languages/technologies and a moderate understanding of the corresponding front-end language/technology, from the following but not limited to: two or more modern programming languages used in the enterprise, experience working with various APIs and external services, and experience with both relational and NoSQL databases. Preferred Qualifications Experience: 8-10 years B.Tech / Master's degree (regular) What Are Our Desired Skills and Capabilities? Supervision - Determines methods and procedures on new assignments and may coordinate activities of other personnel (Team Lead). Experience of working on SOA architecture, microservices architecture, and event-driven and serverless architectures. Good knowledge of Java/JEE design patterns, Enterprise Integration design patterns, SOA design patterns, and microservices design patterns.
Experience of working on RESTful services, SOAP web services, gRPC, and async & streaming technologies. Experience of working on the Java 1.8+, Spring 4.x+, Spring Boot, Spring Data, Spring REST, Spring MVC, Spring Integration (i.e., no EJB), Tomcat 8.5.x (embedded version), JUnit + Spring Test application stack. Experience of working on ORM/persistence frameworks or technologies such as Hibernate, MyBatis, iBatis. Experience in designing and developing fault-tolerant, HA systems. Good hands-on experience with the AWS stack and services such as S3, EC2, KMS, EKS, MSK, Lambda, IAM, RDS, DynamoDB, CloudWatch. Good hands-on experience with Cloud Native projects such as Prometheus, Grafana, Argo, Harbor, Helm, Istio, K8s, etc. Good experience of working with the Agile development model and Test-Driven Development (TDD) methodologies. Good experience of using container technology to build out an automated platform architecture that allows for seamless deployment between on-premise and external cloud environments. Good experience of leveraging open technology such as Docker, Kubernetes, Terraform, Bash, JavaScript, Python, Git, Jenkins, Linux, HAProxy, AWS Cloud, ELK, Java, Kafka, MongoDB, ZooKeeper, and Amazon Web Services (EC2 Container Service, CloudFormation, Elastic Load Balancer, Auto Scaling Groups). Good experience of integrating systems using a wide variety of protocols such as REST, SOAP, MQ, TCP/IP, JSON and others. Good experience of designing and building automated code deployment systems that simplify development work and make our work more consistent and predictable. Exhibit a deep understanding of server virtualization, networking and storage, ensuring that the solution scales and performs with high availability and uptime. Soft Skills: Is adaptable, result-oriented, portrays a positive attitude, and is flexible and multi-task-oriented. Is able to accept guidance and is a good listener.
Has good oral and written communication skills. Has the ability to understand business needs and translate them into technology solutions. Has strong research and problem-resolution skills. Is a strong team player with good time management, interpersonal and presentation skills. Has a strong customer focus and understands external and internal customer expectations. Is able to articulate technical solutions in language understood by business users. Has a go-getter attitude toward challenging development tasks. Can drive change and has a good innovation track record.

Posted 6 days ago

Apply

0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site


What is your Role? You will work in a multi-functional role combining expertise in system and Hadoop administration. You will work in a team that often interacts with customers on various aspects of technical support for the deployed system. You will be deputed at customer premises to assist customers with issues related to system and Hadoop administration. You will interact with the QA and Engineering teams to coordinate issue resolution within the SLA promised to the customer. What will you do? Deploying and administering Hortonworks, Cloudera, and Apache Hadoop/Spark ecosystems. Installing the Linux operating system and networking. Writing Unix shell/Ansible scripts for automation. Maintaining core components such as ZooKeeper, Kafka, NiFi, HDFS, YARN, Redis, Spark, HBase, etc. Taking care of the day-to-day running of Hadoop clusters using Ambari/Cloudera Manager/other monitoring tools, ensuring that the Hadoop cluster is up and running all the time. Maintaining HBase clusters and capacity planning. Maintaining the SOLR cluster and capacity planning. Working closely with the database, network and application teams to make sure that all big data applications are highly available and performing as expected. Managing the KVM virtualization environment.
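Much of the day-to-day cluster care described above reduces to simple capacity arithmetic fed by monitoring. A sketch of an HDFS usage alert check (the thresholds and byte figures are illustrative assumptions, not from any specific cluster):

```python
def usage_alert(used_bytes, capacity_bytes, warn=0.75, crit=0.90):
    """Classify storage usage as OK / WARN / CRIT by fill ratio."""
    ratio = used_bytes / capacity_bytes
    if ratio >= crit:
        return "CRIT", ratio
    if ratio >= warn:
        return "WARN", ratio
    return "OK", ratio

# 820 of 1000 units used -> above the 75% warning threshold.
level, ratio = usage_alert(820, 1000)
print(level, f"{ratio:.0%}")
```

In practice the inputs would be parsed from `hdfs dfsadmin -report` or pulled from Ambari/Cloudera Manager metrics, and a CRIT result would trigger rebalancing or capacity expansion.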

Posted 6 days ago

Apply

5.0 - 8.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site


Key Responsibilities Lead the deployment, configuration, and ongoing administration of Hortonworks, Cloudera, and Apache Hadoop/Spark ecosystems. Maintain and monitor core components of the Hadoop ecosystem including ZooKeeper, Kafka, NiFi, HDFS, YARN, Redis, Spark, and HBase. Take charge of the day-to-day running of Hadoop clusters using tools like Ambari, Cloudera Manager, or other monitoring tools, ensuring continuous availability and optimal performance. Manage and provide expertise in HBase clusters and SOLR clusters, including capacity planning and performance tuning. Perform installation, configuration, and troubleshooting of Linux operating systems and network components relevant to big data environments. Develop and implement automation scripts using Unix shell/Ansible scripting to streamline operational tasks and improve efficiency. Manage and maintain KVM virtualization environments. Oversee clusters, storage solutions, backup strategies, and disaster recovery plans for big data infrastructure. Implement and manage comprehensive monitoring tools to proactively identify and address system anomalies and performance bottlenecks. Work closely with database teams, network teams, and application teams to ensure high availability and expected performance of all big data applications. Interact directly with customers at their premises to provide technical support and resolve issues related to system and Hadoop administration. Coordinate closely with internal QA and Engineering teams to facilitate issue resolution within the promised SLAs. Skills & Qualifications: Experience: 5-8 years of strong individual contributor experience as a DevOps, System, and/or Hadoop administrator. Domain Expertise: Proficient in Linux administration. Extensive experience with Hadoop infrastructure and administration. Strong knowledge and experience with SOLR.
Proficiency in Configuration Management tools, particularly Ansible. Data Ecosystem Components: Must have hands-on experience and strong knowledge of managing and maintaining: Hortonworks, Cloudera, and Apache Hadoop/Spark ecosystem deployments; core components like ZooKeeper, Kafka, NiFi, HDFS, YARN, Redis, Spark, HBase; and cluster management tools such as Ambari and Cloudera Manager. Strong scripting skills in one or more of Perl, Python, or Shell. Strong experience working with clusters, storage solutions, backup strategies, database management systems, monitoring tools, and disaster recovery plans. Experience managing KVM virtualization environments. Excellent analytical and problem-solving skills, with a methodical approach to debugging complex issues. Strong communication skills (verbal and written) with the ability to interact effectively with technical teams and customers. Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related field, or equivalent relevant work experience. (ref:hirist.tech)
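Capacity planning for a replicated storage layer like HDFS, as called for above, is mostly one formula: raw storage = logical data size × replication factor ÷ (1 − free-space headroom). A sketch, assuming the common defaults of 3× replication and 25% headroom (both are assumptions, not requirements from this posting):

```python
def raw_storage_needed(logical_tb, replication=3, headroom=0.25):
    """Raw cluster storage (TB) for a logical dataset size,
    accounting for block replication and free-space headroom."""
    return logical_tb * replication / (1 - headroom)

# 100 TB of logical data -> 300 TB replicated -> 400 TB raw
# once 25% operational headroom is reserved.
print(raw_storage_needed(100))
```

The same arithmetic, run in reverse, tells you how much new logical data an existing cluster can still absorb before hitting its warning threshold.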

Posted 6 days ago

Apply

5.0 years

0 Lacs

Lucknow, Uttar Pradesh, India

On-site


About The Job Job Description: We are seeking a highly skilled and customer-focused Technical Support Engineer to join our team. This role is responsible for delivering high-quality technical support to our customers, troubleshooting complex technical issues, and collaborating with cross-functional teams to ensure customer success. The Technical Support Engineer is expected to provide advanced technical support on a data-based software product to a global client base and partners. This role requires deep technical expertise, strong problem-solving skills, and the ability to communicate complex technical information effectively. The primary responsibility is to troubleshoot and resolve technical issues, support product adoption, and ensure customer satisfaction. The TSE must have experience working with databases, specifically graph databases, and possess a strong background in Linux, networking, and scripting (Bash/Python). They work collaboratively with engineering teams to escalate and resolve complex issues when necessary (e.g., when a code change is required, or a behavior is seen for the first time). Roles And Responsibilities Respond to customer inquiries and provide in-depth technical support via multiple communication channels. Collaborate with core engineering and solution engineering teams to diagnose and resolve complex technical problems. Create and maintain public documentation, internal knowledge base articles, and FAQs. Monitor and meet SLAs. Triage varying issues in a timely manner based on error messages, log files, thread dumps, stack traces, sample code, and other available data points. Efficiently troubleshoot cluster issues across multiple servers, data centers, and regions, in a variety of clouds (AWS, Azure, GCP, etc.), virtual, and bare-metal environments. The candidate will work during the EMEA time zone (2 PM to 10 PM shift). Requirements Must-Have Skills: Education: B.Tech in Computer Engineering, Information Technology, or a related field.
Experience: GraphDB experience is a must. 5+ years of experience in a technical support role on a data-based software product, at least at L3 level. Linux Expertise: 4+ years with an in-depth understanding of Linux, including the filesystem, process management, memory management, networking, and security. Graph Databases: 3+ years of experience with Neo4j or similar graph database systems. SQL Expertise: 3+ years of experience in SQL for database querying, performance tuning, and debugging. Data Streaming & Processing: 2+ years of hands-on experience with Kafka, ZooKeeper, and Spark. Scripting & Automation: 2+ years with strong skills in Bash scripting and Python for automation, task management, and issue resolution. Containerization & Orchestration: 1+ year of proficiency in Docker, Kubernetes, or other containerization technologies is essential. Monitoring & Performance Tools: Experience with Grafana, Datadog, Prometheus, or similar tools for system and performance monitoring. Networking & Load Balancing: Proficient in TCP/IP, load-balancing strategies, and troubleshooting network-related issues. Web & API Technologies: Understanding of HTTP, SSL, and REST APIs for debugging and troubleshooting API-related issues. Nice-To-Have Skills: Familiarity with Data Science or ML will be an edge. Experience with LDAP, SSO, OAuth authentication. Strong understanding of database internals and system architecture. Cloud certification (at least DevOps Engineer level). (ref:hirist.tech)
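Timely triage of "varying issues based on error messages, log files" usually begins with bucketing log lines by severity and error signature so the most frequent failure surfaces first. A tiny sketch (the log lines and signature format are invented for illustration):

```python
from collections import Counter
import re

def triage(log_lines):
    """Bucket ERROR lines by their leading error signature token,
    most frequent first."""
    errors = Counter()
    for line in log_lines:
        m = re.search(r"ERROR\s+(\S+)", line)
        if m:
            errors[m.group(1)] += 1
    return errors.most_common()

logs = [
    "2024-05-01 10:00:01 ERROR TransactionTimeout id=42",
    "2024-05-01 10:00:02 INFO  heartbeat ok",
    "2024-05-01 10:00:05 ERROR TransactionTimeout id=43",
    "2024-05-01 10:00:09 ERROR ConnectionRefused host=db1",
]
print(triage(logs))  # most frequent signature first
```

The same pattern scales up via `grep`/`awk` or a log platform query; the point is to rank signatures before diving into any single stack trace.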

Posted 6 days ago

Apply

2.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


The Applications Development Intermediate Programmer Analyst is an intermediate level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities. Responsibilities: Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas to identify and define necessary system enhancements, including using script tools and analyzing/interpreting code Consult with users, clients, and other technology groups on issues, and recommend programming solutions, install, and support customer exposure systems Apply fundamental knowledge of programming languages for design specifications. Analyze applications to identify vulnerabilities and security issues, as well as conduct testing and debugging Serve as advisor or coach to new or lower level analysts Identify problems, analyze information, and make evaluative judgements to recommend and implement solutions Resolve issues by identifying and selecting solutions through the applications of acquired technical experience and guided by precedents Has the ability to operate with a limited level of direct supervision. Can exercise independence of judgement and autonomy. Acts as SME to senior stakeholders and /or other team members. Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency. 
Qualifications: 2-5 years of relevant experience in the Financial Services industry Intermediate-level experience in an Applications Development role Consistently demonstrates clear and concise written and verbal communication Demonstrated problem-solving and decision-making skills Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements Education: Bachelor’s degree/University degree or equivalent experience This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required. Citi Global Wealth (previously Citi Private Bank) is seeking a highly motivated Senior Developer to expand the existing global team. The candidate will work as part of the Strategic Data Platform – an innovative technological solution addressing complex data-related needs. The project is in production, but development of the entire solution will take another couple of years, so work will be concentrated on new features. Technology stack: Scala, Akka, Cats Effect, http4s, NGINX, Neo4j, Spark, Kafka, ZooKeeper. Scala is the base language used in the project; however, if you are an experienced Java developer who wants to learn Scala and have been exposed to functional programming, you will have a chance to ease into Scala programming as a member of this team.
Required skills: At least 5 years' professional experience in Java and/or Scala programmer roles Java 8 or newer and/or Scala For Java Devs: Functional programming concepts in Java For Scala Devs: Fluency in functional programming, or able to demonstrate interest in learning and adopting functional programming Analytical, critical thinking and problem-solving skills Experience with HTTP/REST services Experience with micro-services Communication and collaboration skills in English Skills considered a plus: Experience in using non-blocking IO Multi-threaded and parallel programming Experience with GemFire/Geode or other distributed databases or caches Object-oriented & functional patterns Bash shell scripting What would you get in return: Opportunity for professional development in an international and multicultural organisation Develop programming skills using modern technologies Learn from experienced peer software engineers by working together on problem analysis and reviewing the codebase. Attractive and stable employment conditions ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Applications Development ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. 
If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
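The functional-programming expectation in the Citi listing above centres on two ideas that carry over to any language: pure functions (output depends only on input, no side effects) and composition. A minimal, language-agnostic illustration in Python; the `compose` helper and the pipeline functions are invented for this example, not part of any library mentioned in the listing:

```python
from functools import reduce

def compose(*fns):
    """Right-to-left function composition: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

# Pure functions: each depends only on its input and has no side effects.
normalize = str.strip
shout = str.upper
exclaim = lambda s: s + "!"

# Building behaviour by composing small functions instead of mutating state.
pipeline = compose(exclaim, shout, normalize)
assert pipeline("  hello  ") == "HELLO!"
```

Because each stage is pure, stages can be tested in isolation and reordered or reused freely, which is the property the listing's "functional programming concepts" requirement is getting at.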

Posted 6 days ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote


About the role Software Development Engineer (SDE III) - Managed Kafka as a Service We are seeking experienced builders for our Managed Kafka as a Service, a fully managed service that makes it easy for Tesco's development teams to build and run applications that use Apache Kafka to process streaming data. We are looking for engineers who are enthusiastic about data streaming and are as passionate about solving problems at scale. As a member of the Managed Kafka as a Service team, you will be making contributions to the entire stack - the APIs, the core Kafka platform, and stand-alone tools that make it easier to administer Kafka. You will design and build new features, make performance improvements, identify and investigate new technologies, prototype solutions, build scalable services, and test and review changes, to deliver an exceptional customer experience. The ideal candidate has experience designing large-scale systems supporting millions of transactions per second, enjoys solving complex software problems, and possesses analytical, design and problem-solving skills. Ideally you have an in-depth understanding of streaming data technologies such as Apache Kafka, and experience with open-source data processing frameworks like Apache Spark, Apache Flink, or Apache Storm. Your responsibilities will include collaborating with other engineers to build large-scale cloud services, and working with senior leaders to define your team's roadmap, including identifying design and code changes needed in the underlying open-source platforms. Programming * Good understanding of the Java/J2EE programming language, microservices, Spring, Spring Boot, NoSQL, dependency injection frameworks, RESTful services, build tools, etc. 
* Understands the framework and enough of the tool ecosystem of the chosen language to implement an end-to-end component with minimal assistance * Comfortable producing and refactoring code without assistance * Able to test-drive features in the programming language of choice * Understands the different major language paradigms (OOP/Functional) * Understands the presence of an abstraction beneath the language (JVM/CLR) * Can debug code * Can understand and resolve complex issues * Has strong knowledge of observability and alerting patterns. Kafka * Experience setting up multi-region Kafka clusters, MirrorMaker, DR, ZooKeeper, KRaft, and replication from scratch. * Hands-on experience with Kafka brokers and an understanding of their underlying functionality. * Hands-on experience with Kafka Streams and ksqlDB and an understanding of their underlying implementation and functionality. * Hands-on experience with Confluent or Strimzi Kafka connectors and their functionality. * Good understanding of Kafka client (producer and consumer) functionality. * Ability to design and implement technical solutions for the Kafka on-prem and cloud platform. * Deep knowledge of Kafka best practices and implementation experience. * Responsible for assisting producer and consumer applications to onboard onto Kafka. * Good experience troubleshooting Kafka platform issues. * Able to troubleshoot and support producer and consumer issues. Qualifications * Good understanding of the Java/J2EE programming language, microservices, Spring, Spring Boot, NoSQL, dependency injection frameworks, RESTful services, build tools, etc. * Understands the framework and enough of the tool ecosystem of the chosen language to implement an end-to-end component with minimal assistance * Comfortable producing and refactoring code without assistance * Experience with one or more public cloud platforms. * Experience setting up multi-region Kafka clusters, MirrorMaker, DR, ZooKeeper, KRaft, and replication from scratch. 
* Hands-on experience with Kafka brokers and an understanding of their underlying functionality. * Good to have: experience with Kafka Streams and ksqlDB and an understanding of their underlying implementation and functionality. You will be responsible for Refer to the role description above. You will need Refer to the "You will be responsible for" section above. What's in it for you? At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on the current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles - simple, fair, competitive, and sustainable. Your fixed pay is the guaranteed pay as per your contract of employment. Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company’s policy. In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF. Tesco promotes programmes that support a culture of health and wellness, including insurance for colleagues and their family. Our medical insurance provides coverage for dependents, including parents or in-laws. We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents. Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request. Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan. 
Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle. About Us Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues Tesco Technology Today, our Technology team consists of over 5,000 experts spread across the UK, Poland, Hungary, the Czech Republic, and India. In India, our Technology division includes teams dedicated to Engineering, Product, Programme, Service Desk and Operations, Systems Engineering, Security & Capability, Data Science, and other roles. At Tesco, our retail platform comprises a wide array of capabilities, value propositions, and products, essential for crafting exceptional retail experiences for our customers and colleagues across all channels and markets. This platform encompasses all aspects of our operations – from identifying and authenticating customers, managing products, pricing, promoting, enabling customers to discover products, facilitating payment, and ensuring delivery. By developing a comprehensive Retail Platform, we ensure that as customer touchpoints and devices evolve, we can consistently deliver seamless experiences. 
This adaptability allows us to respond flexibly without the need to overhaul our technology, thanks to the creation of capabilities we have built. At Tesco, inclusion is at the heart of everything we do. We believe in treating everyone fairly and with respect, valuing individuality to create a true sense of belonging. It’s deeply embedded in our values — we treat people how they want to be treated. Our goal is to ensure all colleagues feel they can be themselves at work and are supported to thrive. Across the Tesco group, we are building an inclusive workplace that celebrates the diverse cultures, personalities, and preferences of our colleagues — who, in turn, reflect the communities we serve and drive our success. At Tesco India, we are proud to be a Disability Confident Committed Employer, reflecting our dedication to creating a supportive and inclusive environment for individuals with disabilities. We offer equal opportunities to all candidates and encourage applicants with disabilities to apply. Our fully accessible recruitment process includes reasonable adjustments during interviews - just let us know what you need. We are here to ensure everyone has the chance to succeed. We believe in creating a work environment where you can thrive both professionally and personally. Our hybrid model offers flexibility - spend 60% of your week collaborating in person at our offices or local sites, and the rest working remotely. We understand that everyone’s journey is different, whether you are starting your career, exploring passions, or navigating life changes. Flexibility is core to our culture, and we’re here to support you. Feel free to talk to us during your application process about any support or adjustments you may need.
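Much of the Kafka client knowledge the listing above asks for comes down to how producers map keyed records onto partitions: the same key must always land on the same partition so that per-key ordering holds. A toy sketch of that contract, assuming a fixed partition count. Note that Kafka's real DefaultPartitioner hashes the key bytes with murmur2; MD5 is used here purely as an illustrative stand-in:

```python
import hashlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Map a record key deterministically onto one of num_partitions.

    Kafka's DefaultPartitioner uses a murmur2 hash of the key bytes;
    MD5 here only illustrates the deterministic key -> partition contract.
    """
    digest = hashlib.md5(key).digest()
    h = int.from_bytes(digest[:4], "big")
    return h % num_partitions

# The same key always maps to the same partition, preserving per-key order.
p1 = partition_for(b"order-42", 6)
p2 = partition_for(b"order-42", 6)
assert p1 == p2
```

This is also why changing the partition count of an existing topic breaks key-to-partition affinity: the modulus changes, so keys can move to different partitions.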

Posted 1 week ago

Apply

8.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


About Bounteous-Accolite Accolite is a leading Digital Engineering, Cloud and Data & AI services provider that delivers robust digital transformation initiatives to Global 2000 customers. Accolite provides these services to the banking and financial services, insurance, technology, media and telecom, healthcare, and logistics industries. Accolite has 3,000 professionals globally and a presence across the United States, Canada, Mexico, Europe, and India. Accolite and Bounteous Join Forces, Forming Global Leader in Digital Transformation Services Merger strengthens end-to-end digital experience, commerce, and engineering capabilities on a global scale The combined company will be headquartered out of Chicago with offices across North America, Europe and Asia, and will be 5,000 people strong, with 1,200+ in North America, 3,400+ in APAC and 400+ in Europe. Post the merger, the company serves over 300 Fortune 1000 and high-growth clients, solving their mission-critical problems. With this merger, the company will be amongst the world’s leading digital transformation consultancies. For more information visit: https://www.accolite.com/. Experience: 8-10 Years Notice Period: 0-15 Days Required: 1. Strong experience in DevOps covering: o Release Automation o Blue/Green Deployment o Observability – OTel, Grafana, Loki, Prometheus/Cortex, Tempo 2. Exposure to these technologies: o API and message-based architectures o Load balancer, ZooKeeper o Docker / Podman / Kubernetes o Jenkins pipeline / GitHub Actions o Cloud offerings such as Azure and AWS o IaC (desirable) o Configuration management tools such as Ansible, Chef (desirable) 3. Strong scripting experience in languages like Python, Groovy, Bash, PowerShell 4. Strong understanding of Linux systems, OS fundamentals and networking
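The blue/green deployment experience the listing above calls for amounts to keeping two identical environments and flipping a routing pointer only after the idle environment passes health checks. A minimal sketch under those assumptions; the class and the `health_check` hook are illustrative, not any particular tool's API:

```python
class BlueGreenRouter:
    """Route traffic to one of two environments; switch only after a health check."""

    def __init__(self):
        self.active = "blue"   # environment currently receiving live traffic
        self.idle = "green"    # environment where the next release is deployed

    def deploy_and_switch(self, health_check) -> bool:
        # The new release goes to the idle environment; traffic flips only if
        # the health check passes, so a bad release never sees live traffic.
        if health_check(self.idle):
            self.active, self.idle = self.idle, self.active
            return True
        return False

router = BlueGreenRouter()
router.deploy_and_switch(lambda env: True)   # healthy release: traffic flips to green
assert router.active == "green"
router.deploy_and_switch(lambda env: False)  # unhealthy release: traffic stays on green
assert router.active == "green"
```

The payoff of the pattern is that rollback is just flipping the pointer back, since the previous version is still running in the now-idle environment.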

Posted 1 week ago

Apply

0 years

3 - 5 Lacs

Pune

On-site

Pune About Us We empower enterprises globally through intelligent, creative, and insightful services for data integration, data analytics and data visualization. Hoonartek is a leader in enterprise transformation, data engineering and an acknowledged world-class Ab Initio delivery partner. Using centuries of cumulative experience, research and leadership, we help our clients eliminate the complexities & risk of legacy modernization and safely deliver big data hubs, operational data integration, business intelligence, risk & compliance solutions and traditional data warehouses & marts. At Hoonartek, we work to ensure that our customers, partners and employees all benefit from our unstinting commitment to delivery, quality and value. Hoonartek is increasingly the choice for customers seeking a trusted partner of vision, value and integrity How We Work? Define, Design and Deliver (D3) is our in-house delivery philosophy. It’s culled from agile and rapid methodologies and focused on ‘just enough design’. We embrace this philosophy in everything we do, leading to numerous client success stories and indeed to our own success. We embrace change, empowering and trusting our people and building long and valuable relationships with our employees, our customers and our partners. We work flexibly, even adopting traditional/waterfall methods where circumstances demand it. At Hoonartek, the focus is always on delivery and value. Job Description We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes—ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain. 
Key Responsibilities • Ensure platform uptime and application health as per SLOs/KPIs • Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc. • Debug and resolve complex production issues, performing root cause analysis • Automate routine tasks and implement self-healing systems • Design and maintain dashboards, alerts, and operational playbooks • Participate in incident management, problem resolution, and RCA documentation • Own and update SOPs for repeatable processes • Collaborate with L3 and Product teams for deeper issue resolution • Support and guide L1 operations team • Conduct periodic system maintenance and performance tuning • Respond to user data requests and ensure timely resolution • Address and mitigate security vulnerabilities and compliance issues Technical Skillset • Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger • Strong Linux fundamentals and scripting (Python, Shell) • Experience with Apache NiFi, Airflow, YARN, and ZooKeeper • Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki • Working knowledge of Kubernetes, Docker, Jenkins CI/CD pipelines • Strong SQL skills (Oracle/Exadata preferred) • Familiarity with DataHub, DataMesh, and security best practices is a plus SHIFT - 24/7
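The SLO/KPI responsibilities in the SRE listing above imply tracking an error budget: the downtime a service may accumulate in a window while still meeting its availability target. A minimal calculation; the 99.9% monthly target is an assumed example, not any specific team's actual SLO:

```python
def error_budget_minutes(slo: float, window_minutes: int) -> float:
    """Minutes of allowed downtime in a window for a given availability SLO."""
    return (1.0 - slo) * window_minutes

# A 99.9% SLO over a 30-day month leaves roughly 43.2 minutes of downtime budget.
budget = error_budget_minutes(0.999, 30 * 24 * 60)
assert abs(budget - 43.2) < 0.01
```

In practice the remaining budget, not raw uptime, drives decisions: a team that has burned most of its budget slows releases, while one with budget to spare can take more deployment risk.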

Posted 1 week ago

Apply

0 years

3 - 10 Lacs

Bengaluru

On-site

Employment Type Permanent Closing Date 13 June 2025 11:59pm Job Title IT Domain Specialist Job Summary As the IT Domain Specialist, your role is key in improving the stability and reliability of our cloud offerings and solutions to ensure continuity of service for our customers. You will be responsible for supporting the end-to-end development of key cloud platforms and solutions, which includes technical design, integration requirements, delivery and lifecycle management. You are a specialist across and/or within a technology domain and viewed as the go-to person in the business to provide technical support in the development and delivery of cloud infrastructure platforms and solutions. Job Description Who We Are Telstra is Australia’s leading telecommunications and technology company, spanning over a century with a footprint in 20+ countries. In India, we’re building a platform for innovative delivery and engagement that will strengthen our position as an industry leader. We’ve grown quickly since our inception in 2019, now with offices in Pune, Hyderabad and Bangalore. Focus of the Role The Event Data Engineer's role is to plan, coordinate, and execute all activities related to the requirements interpretation, design and implementation of business intelligence capability. This individual will apply proven industry and technology experience as well as communication skills, problem-solving skills, and knowledge of best practices to issues related to design, development, and deployment of mission-critical business systems with a focus on quality application development and delivery. What We Offer Performance-related pay Access to thousands of learning programs so you can level-up Global presence across 22 countries; opportunities to work where we do business. 
Up to 26 weeks maternity leave provided to the birth mother with benefits for all child births Employees are entitled to 12 paid holidays per calendar year Eligible employees are entitled to 12 days of paid sick / casual leave per calendar year Relocation support options across India, from junior to senior positions within the company Receive insurance benefits such as medical, accidental and life insurances What You’ll Do Experience in Analysis, Design, and Development in the fields of Business Intelligence, Databases and Web-based Applications. Experience in NiFi, Kafka, Spark, and Cloudera Platforms design and development. Experience in Alteryx Workflow development and Data Visualization development using Tableau to create complex, intuitive dashboards. In-depth understanding of and experience in the Cloudera framework, including CDP (Cloudera Data Platform). Experience in Cloudera Manager to monitor the Hadoop cluster and critical services. Hadoop administration (Hive, Kafka, ZooKeeper, etc.). Experience in data management including data integration, modeling, optimization and data quality. Strong knowledge in writing SQL and database management. Working experience in tools like Alteryx and KNIME will be an added advantage. Implementing data security and access control compliant to Telstra Security Standards Ability to review vendor designs and recommend solutions based on industry best practices Understand overall business operations and develop innovative solutions to help improve productivity Ability to understand and design provisioning solutions at Telstra and how data lakes fit in Monitor the process of software configuration/development/testing to assure quality deliverables. Ensure standards of QA are being met Review deliverables to verify that they meet client and contract expectations; Implement and enforce high standards for quality deliverables Analyse performance and capacity issues of the highest complexity with data applications. 
Assist leadership with development and management of new application capabilities to improve productivity Provide training and educate other team members around core capabilities and help them deliver high-quality solutions and deliverables/documentation Self-motivated to design and develop user requirements, test, and deploy the changes into production. About You Experience in data flow development and Data Visualization development to create complex, intuitive dashboards. Experience with Hortonworks Data Flow (HDF), which includes NiFi and Kafka, and experience with Cloudera Edge Big Data & Data Lake experience Cloudera Hadoop with project implementation experience Data Analytics experience Data Analyst and Data Science exposure Exposure to various data management architectures like data warehouse, data lake and data hub, and supporting processes like data integration, data modeling. Working experience with large, heterogeneous datasets in building and optimizing data pipelines, pipeline architectures and integrated datasets using data integration technologies Experience in supporting operations and knowledge of standard operating procedures: OS Patches, Security Scan, Log Onboarding, Agent Onboarding, Log Extraction etc. Development, deployment and scaling of containerised applications with Docker preferred. A good understanding of enterprise application integration, including SOA, ESB, EAI, ETL environments and an understanding of integration considerations such as process orchestration, customer data integration and master data management A good understanding of the security processes, standards & issues involved in multi-tier, multi-tenant web applications We're amongst the top 2% of companies globally in the CDP Global Climate Change Index 2023, being awarded an 'A' rating. If you want to work for a company that cares about sustainability, we want to hear from you. 
As part of your application with Telstra, you may receive communications from us on +61 440 135 548 (for job applications in Australia) and +1 (623) 400-7726 (for job applications in the Philippines and India). When you join our team, you become part of a welcoming and inclusive community where everyone is respected, valued and celebrated. We actively seek individuals from various backgrounds, ethnicities, genders and disabilities because we know that diversity not only strengthens our team but also enriches our work. We have zero tolerance for harassment of any kind, and we prioritise creating a workplace culture where everyone is safe and can thrive. As part of the hiring process, all identified candidates will undergo a background check, and the results will play a role in the final decision regarding your application. We work flexibly at Telstra. Talk to us about what flexibility means to you. When you apply, you can share your pronouns and / or any reasonable adjustments needed to take part equitably during the recruitment process. We are aware of current limitations with our website accessibility and are working towards improving this. Should you experience any issues accessing information or the application form, and require this in an alternate format, please contact our Talent Acquisition team on DisabilityandAccessibility@team.telstra.com.

Posted 1 week ago

Apply

0 years

3 - 10 Lacs

Bengaluru

On-site

Employment Type Permanent Closing Date 13 June 2025 11:59pm Job Title IT Domain Specialist Job Summary As the IT Domain Specialist, your role is key in improving the stability and reliability of our cloud offerings and solutions to ensure continuity of service for our customers. You will be responsible for supporting the end-to-end development of key cloud platform and solutions which includes technical design, integration requirements, delivery and lifecycle management. You are a specialist across and/or within a technology domain and viewed as the go-to person in the business to provide technical support in the development and delivery of cloud infrastructure platforms and solutions. Job Description Who We Are Telstra is Australia’s leading telecommunications and technology company spanning over a century with a footprint in over 20+ countries. In India, we’re building a platform for innovative delivery and engagement that will strengthen our position as an industry leader. We’ve grown quickly since our inception in 2019, now with offices in Pune, Hyderabad and Bangalore. Focus of the Role Event Data Engineer role is to plan, coordinate, and execute all activities related to the requirements interpretation, design and implementation of Business intelligence capability. This individual will apply proven industry and technology experience as well as communication skills, problem-solving skills, and knowledge of best practices to issues related to design, development, and deployment of mission-critical business systems with a focus on quality application development and delivery. What We Offer Performance-related pay Access to thousands of learning programs so you can level-up Global presence across 22 countries; opportunities to work where we do business. 
Up to 26 weeks maternity leave provided to the birth mother with benefits for all child births Employees are entitled to 12 paid holidays per calendar year Eligible employees are entitled to 12 days of paid sick / casual leave per calendar year Relocation support options across India, from junior to senior positions within the company Receive insurance benefits such as medical, accidental and life insurances What You’ll Do Experience in Analysis, Design, and Development in the fields of Business Intelligence, Databases and Web-based Applications. Experience in NiFi, Kafka, Spark, and Cloudera Platforms design and development. Experience in Alteryx Workflow development and Data Visualization development using Tableau to create complex, intuitive dashboards. In-depth understanding and experience in Cloudera framework includes CDP (Cloudera Data Platform). Experience in Cloudera manager to monitor Hadoop cluster and critical services . Hadoop administration ( Hive, Kafka, zookeeper etc.). Experience in data management including data integration, modeling, optimization and data quality. Strong knowledge in writing SQL and database management. Working experience in tools like Alteryx , KNIME will be added advantage. Implementing Data security and access control compliant to Telstra Security Standards Ability to review vendor designs and recommended solutions based on industry best practises Understand overall business operations and develops innovative solutions to help improve productivity Ability to understand and design provisioning solutions at Telstra and how Data lakes Monitor process of software configuration/development/testing to assure quality deliverable. Ensure standards of QA are being met Review deliverables to verify that they meet client and contract expectations; Implement and enforce high standards for quality deliverables Analyses performance and capacity issues of the highest complexity with Data applications. 
Assists leadership with development and management of new application capabilities to improve productivity Provide training and educate other team members around core capabilities and helps them deliver high quality solutions and deliverables/documentation Self-Motivator to perform Design / Develop user requirements, test and deploy the changes into production. About You Experience in data flow development and Data Visualization development to create complex, intuitive dashboards. Experience with Hortonworks Data Flow (HDF) this includes NiFi and Kafka experience with Cloudera Edge Big Data & Data Lake Experience Cloudera Hadoop with project implementation experience Data Analytics experience Data Analyst and Data Science exposure Exposure to various data management architectures like data warehouse, data lake and data hub, and supporting processes like data integration, data modeling. Working experience with large, heterogeneous datasets in building and optimizing data pipelines, pipeline architectures and integrated datasets using data integration technologies Experience in supporting operations and knowledge of standard operation procedures: OS Patches, Security Scan, Log Onboarding, Agent Onboarding, Log Extraction etc. Development and deployment and scaling of containerised applications with Docker preferred. A good understanding of enterprise application integration, including SOA, ESB, EAI, ETL environments and an understanding of integration considerations such as process orchestration, customer data integration and master data management A good understanding of the security processes, standards & issues involved in multi-tier, multi-tenant web applications We're amongst the top 2% of companies globally in the CDP Global Climate Change Index 2023, being awarded an 'A' rating. If you want to work for a company that cares about sustainability, we want to hear from you. 
As part of your application with Telstra, you may receive communications from us on +61 440 135 548 (for job applications in Australia) and +1 (623) 400-7726 (for job applications in the Philippines and India). When you join our team, you become part of a welcoming and inclusive community where everyone is respected, valued and celebrated. We actively seek individuals from various backgrounds, ethnicities, genders and disabilities because we know that diversity not only strengthens our team but also enriches our work. We have zero tolerance for harassment of any kind, and we prioritise creating a workplace culture where everyone is safe and can thrive. As part of the hiring process, all identified candidates will undergo a background check, and the results will play a role in the final decision regarding your application. We work flexibly at Telstra. Talk to us about what flexibility means to you. When you apply, you can share your pronouns and / or any reasonable adjustments needed to take part equitably during the recruitment process. We are aware of current limitations with our website accessibility and are working towards improving this. Should you experience any issues accessing information or the application form, and require this in an alternate format, please contact our Talent Acquisition team on DisabilityandAccessibility@team.telstra.com.
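The Telstra role above asks for strong SQL and data-quality skills. As a purely illustrative sketch of the kind of data-quality rule such work involves (hypothetical table and rule, using Python's built-in sqlite3, not Telstra's actual stack):

```python
import sqlite3

# Hypothetical data-quality check: count rows violating a completeness/range rule.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE usage (msisdn TEXT, data_mb REAL)")
conn.executemany(
    "INSERT INTO usage VALUES (?, ?)",
    [("61400000001", 512.0), ("61400000002", None), ("61400000003", -5.0)],
)

# Rule: data_mb must be present and non-negative.
bad = conn.execute(
    "SELECT COUNT(*) FROM usage WHERE data_mb IS NULL OR data_mb < 0"
).fetchone()[0]
print(bad)  # 2
```

In practice the same rule would run against Hive or Impala tables rather than SQLite, but the SQL shape is identical.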

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


As an SDE 3 you will be responsible for solving complex problems, elevating engineering and operational excellence, and leading new tech discovery and adoption. You will ensure high standards of code and design quality, mentor junior developers, and proactively manage technical risks to ensure successful project delivery.

Responsibilities:

Solve complex, ambiguous problems at the team level.
Raise the bar for engineering excellence.
Raise the bar for operational excellence at the team level.
Lead new tech discovery for the team.
Drive new tech adoption within the team.
Act as custodian of the team's code and design quality.
Coach SDE1s and SDE2s within the team.
Proactively identify technical risk and de-risk projects in the team.
Bring a culture of learning and innovation to the team.
Build a platform that improves MTTD and MTTR.
Create solutions to a vision statement.
Analyze and improve system performance.
Guide the team on coding patterns, languages, and frameworks.

Requirements:

B.Tech/M.Tech in CSE from a Tier 1 college.
Computer science fundamentals: object-oriented programming, design patterns, data structures, and algorithm design.
Proficiency with the Java stack (Java/Java design patterns).
Experience building scalable microservices and distributed systems.
5+ years of experience contributing to architecture and design in a product setup.
Total work experience of 7 to 10 years.
Technology/tools: Spring, Hibernate, RabbitMQ, Kafka, Zookeeper, Elasticsearch, REST APIs.
Databases: Cassandra, MongoDB, Redis, MS-SQL, MySQL.
Hands-on experience working at large scale.
Hands-on experience in low- and high-level design (LLD + HLD).
Proficient in multiple programming languages and technology stacks.
Expert in high-level design.
Expert in the CI/CD capabilities required to improve efficiency.
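The posting's goal of improving MTTD and MTTR rests on simple definitions: mean time to detect is the average gap between incident start and detection, and mean time to resolve is the average gap between detection and resolution. A minimal sketch with made-up incident timestamps (illustrative only, not this employer's tooling):

```python
from datetime import datetime

# Hypothetical incidents: (started, detected, resolved)
incidents = [
    (datetime(2024, 1, 1, 10, 0), datetime(2024, 1, 1, 10, 10), datetime(2024, 1, 1, 11, 10)),
    (datetime(2024, 1, 2, 9, 0), datetime(2024, 1, 2, 9, 20), datetime(2024, 1, 2, 9, 50)),
]

def mean_minutes(pairs):
    """Average gap in minutes across (earlier, later) timestamp pairs."""
    total = sum((later - earlier).total_seconds() for earlier, later in pairs)
    return total / len(pairs) / 60

mttd = mean_minutes([(s, d) for s, d, _ in incidents])  # start -> detected
mttr = mean_minutes([(d, r) for _, d, r in incidents])  # detected -> resolved
print(mttd, mttr)  # 15.0 45.0
```

A platform "improves MTTD and MTTR" by driving both averages down, e.g. via alerting (detection) and runbooks or automation (resolution).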

Posted 1 week ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Splunk, a Cisco company, is driving the future of digital resilience with a powerful, unified security and observability platform built for hybrid, multi-cloud environments. Our technology is trusted by the world’s leading organizations—but what truly sets us apart is our people. We celebrate individuality, curiosity, and purpose. If you’re a builder at heart with a passion for high-scale, mission-critical systems and a background in cybersecurity, we’d love to meet you.

The Role

As a Senior Software Engineer – Fullstack, you’ll bring end-to-end engineering expertise with a strong emphasis on scalable backend development and distributed systems, while also contributing to frontend enhancements and UI fixes. You’ll help develop intelligent, ML-driven security features and ensure our cloud-native applications are secure, performant, and resilient at scale. Working from India, you’ll collaborate with global engineering teams to build enterprise-grade solutions used by some of the world’s largest organizations.

What You'll Do

Design and build robust backend components for large-scale, distributed cybersecurity platforms using Java, Scala, Python, and Node.js.
Tackle frontend development for smaller features and bug fixes using JavaScript and jQuery.
Troubleshoot production issues across the full stack—from the database to the UI—and partner with customers and stakeholders to resolve them efficiently.
Partner closely with customers to identify and resolve infrastructure pain points, while elevating data clarity and the value of delivered security insights.
Build and maintain machine learning-driven applications that leverage big data technologies like Spark, Hadoop, Hive, and Impala for real-time cybersecurity insights.
Develop and maintain CI/CD pipelines using GitLab, with automation for building, testing, and deploying secure, high-quality software.
Write automated tests, drive code coverage, and conduct performance testing to ensure application reliability.
Work on security compliance initiatives including FIPS 140-2/3 and STIG requirements.
Collaborate across SRE, infrastructure, data, and security teams to improve performance, scalability, and observability.
Monitor and analyze production systems using Splunk SPL and other observability tools.

What You Bring

8+ years of software development experience, with deep backend expertise and full-stack exposure.
Strong programming skills in Java, Scala, Python, Node.js, and scripting languages like Shell.
Hands-on experience with Linux environments including Red Hat, Ubuntu, and Oracle Enterprise Linux.
Proven experience with distributed systems, Kafka, Zookeeper, and Protobuf.
Expertise in containerization and orchestration using Docker and Kubernetes.
Proficient with GitLab CI/CD, infrastructure automation, and test frameworks.
Familiarity with frontend technologies including JavaScript and jQuery.
Understanding of security frameworks such as FIPS and STIG.
Strong analytical and communication skills; able to explain complex issues to technical and non-technical audiences.
Agile development experience and the ability to work effectively across global, cross-functional teams.

Nice to Have

Familiarity with Splunk SPL and other observability tools.
Experience developing ML-based applications in the cybersecurity space.
Exposure to performance tuning, incident response, and monitoring best practices in production environments.

We value diversity, equity, and inclusion at Splunk and are an equal employment opportunity employer. Qualified applicants receive consideration for employment without regard to race, religion, color, national origin, ancestry, sex, gender, gender identity, gender expression, sexual orientation, marital status, age, physical or mental disability or medical condition, genetic information, veteran status, or any other consideration made unlawful by federal, state, or local laws.
We consider qualified applicants with criminal histories, consistent with legal requirements.
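Kafka consumer lag, the distance between a partition's log-end offset and the consumer's committed offset, is a core health signal in the kind of distributed-systems work this role describes. A broker-free arithmetic sketch with hypothetical offsets (illustrative only; real monitoring would query the broker or a tool like Prometheus):

```python
# Hypothetical per-partition offsets: log-end offset vs. consumer's committed offset.
log_end_offsets = {0: 1500, 1: 980, 2: 2040}
committed_offsets = {0: 1450, 1: 980, 2: 1900}

def total_lag(end, committed):
    """Sum of (log end - committed) across partitions; 0 means fully caught up."""
    return sum(end[p] - committed[p] for p in end)

print(total_lag(log_end_offsets, committed_offsets))  # 190
```

A steadily growing total is the usual signal that consumers cannot keep up with producers and that throughput tuning or scaling is needed.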

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Company Description

👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale — across all devices and digital mediums, and our people exist everywhere in the world (18,000+ experts across 38 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in!

Job Description

REQUIREMENTS:

Total experience of 5+ years.
Extensive experience in Java 8 or higher, Spring Framework (Core/Boot/MVC), Hibernate/JPA, and microservices architecture.
Proficient in GitHub Copilot prompt-based coding.
Hands-on experience with RDBMSs like SQL Server, Oracle, MySQL and PostgreSQL.
Strong backend development skills with technologies and languages including Node.js, Fastify, Express, Java, Micronaut, and databases such as MongoDB, Elasticsearch, and PostgreSQL.
Expertise in writing high-quality code following object-oriented design principles, with a strong balance between performance, extensibility, and maintainability.
Experience in SOA-based architecture and web services (Apache CXF/JAX-WS/JAX-RS/SOAP/REST).
Hands-on experience in low- and high-level design (LLD + HLD).
Strong working experience with RabbitMQ, Kafka, Zookeeper, and REST APIs.
Expert in the CI/CD capabilities required to improve efficiency.
Hands-on experience deploying applications to hosted data centers or cloud environments using technologies such as Docker, Kubernetes, Jenkins, Azure DevOps and Google Cloud Platform.
A good understanding of UML and design patterns.
Ability to simplify solutions, optimize processes, and resolve escalated issues efficiently.
Strong problem-solving skills and a passion for continuous improvement.
Strong communication skills and the ability to collaborate effectively with cross-functional teams.
RESPONSIBILITIES:

Writing and reviewing great quality code.
Understanding functional requirements thoroughly and analyzing the client’s needs in the context of the project.
Envisioning the overall solution for defined functional and non-functional requirements, and being able to define the technologies, patterns and frameworks to realize it.
Determining and implementing design methodologies and tool sets.
Enabling application development by coordinating requirements, schedules, and activities.
Being able to lead/support UAT and production rollouts.
Creating, understanding and validating the WBS and estimated effort for a given module/task, and being able to justify it.
Addressing issues promptly, responding positively to setbacks and challenges with a mindset of continuous improvement.
Giving constructive feedback to team members and setting clear expectations.
Helping the team in troubleshooting and resolving complex bugs.
Coming up with solutions to any issue raised during code/design review and being able to justify the decision taken.
Carrying out POCs to make sure that suggested designs/technologies meet the requirements.

Qualifications

Bachelor’s or master’s degree in computer science, information technology, or a related field.
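The WBS responsibility above is, at its core, a rollup: task-level estimates sum to module effort, and modules sum to the project total. A toy sketch with hypothetical modules and person-day estimates (illustrative only, not Nagarro's estimation process):

```python
# Hypothetical WBS: module -> list of task estimates in person-days.
wbs = {
    "auth-service": [3, 2, 1.5],
    "order-service": [5, 4],
    "api-gateway": [2],
}

# Roll task estimates up to module effort, then to the project total.
effort_by_module = {module: sum(tasks) for module, tasks in wbs.items()}
total_effort = sum(effort_by_module.values())
print(effort_by_module["auth-service"], total_effort)  # 6.5 17.5
```

Being able to "justify" an estimate then amounts to defending each leaf task's number, since everything above it is arithmetic.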

Posted 1 week ago

Apply

8.0 - 14.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


What's the role?

Smart – We are looking for someone who understands how all software components fit together. From persisting data to message processing, you understand how to create a highly scalable, highly available system deployed in the cloud (Spark, Kafka, Zookeeper and all things AWS). You have a good understanding of Linux and container systems (Docker, ECS, Kubernetes).

Fast learning – Never stop learning! We are hiring software engineers who love to learn new things. You have advanced knowledge of a statically typed object-oriented language (Java, Scala) and working knowledge of at least one scripting language (Python, Bash). You stay up to date with the latest technology trends.

Team player – No one is an island! Our teams are highly collaborative. You believe that a team can accomplish more than one person working alone and that continuous improvement is key to a team’s success. When you learn something new, you are eager to share your findings with others.

Adaptable – Our products are evolving, and as they evolve our technical solutions evolve as well. You easily pick up new technologies and use them to tackle problems in creative and innovative ways. You use processes like Agile and Lean to enable software development.

Driven – Figure it out! Creating and delivering valuable software is your top priority. You know your code will work because it is covered in tests. TDD and BDD are your main methods of achieving high test coverage. To deliver quickly, you believe automation is key.

Who are you?

HERE is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, age, gender identity, sexual orientation, marital status, parental status, religion, sex, national origin, disability, veteran status, and other legally protected characteristics.
Bachelor’s or Master’s degree in Computer Science/Information Systems or equivalent.
8 to 14 years of engineering experience.
Excellent teammate with the ability to work within a collaborative environment.
Creative, resourceful and innovative problem solver.
Great communication skills (including active listening and comprehending requirements).

Who are we?

HERE Technologies is a location data and technology platform company. We empower our customers to achieve better outcomes – from helping a city manage its infrastructure or a business optimize its assets to guiding drivers to their destination safely. At HERE we take it upon ourselves to be the change we wish to see. We create solutions that fuel innovation, provide opportunity and foster inclusion to improve people’s lives. If you are inspired by an open world and driven to create positive change, join us. Learn more about us on our YouTube Channel.
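The "Driven" paragraph above names TDD as the working method: write a failing test first, then the minimal code that makes it pass. TDD in miniature, with a hypothetical helper chosen purely for illustration:

```python
# TDD in miniature: the test is written first and drives the implementation.

def test_clamp():
    assert clamp(5, 0, 10) == 5     # inside the range: unchanged
    assert clamp(-3, 0, 10) == 0    # below: pinned to the lower bound
    assert clamp(42, 0, 10) == 10   # above: pinned to the upper bound

# Minimal implementation written only to make the test above pass.
def clamp(value, lo, hi):
    return max(lo, min(value, hi))

test_clamp()
print("all assertions passed")
```

In a real codebase the test would live in a test suite (e.g. pytest or JUnit) and run in CI, which is what ties the posting's TDD and automation points together.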

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies