
145 Zookeeper Jobs - Page 4

Set up a job alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

0.0 - 3.0 years

8 - 12 Lacs

Karnal, Haryana

On-site

Job Description
Title: Java Developer
Experience: 3-5 years
Location: Karnal, Haryana
Job Type: Full-time
Technology Stack (Must Have): Java 8+, Spring Boot, Hibernate, Git, Shell Scripting
Good to Have: Jetty, Apache Maven, Apache Kafka, Apache Zookeeper, Docker, Kubernetes, SSL/TLS

What we're looking for: We are seeking a Java Developer who will be responsible for the design, development, modification, debugging, and/or maintenance of software systems. As part of this opportunity, you will work with a fast-paced and rapidly growing team of sharp techies from IITs, NITs, and the like, serving global clients.

Job Responsibilities: Development, basic testing, and problem-solving. Work closely with senior developers to understand task-level requirements and get the desired job done. Produce excellent-quality code, adhering to expected coding standards and industry best practices. Follow approved life-cycle methodologies, create design documents, and perform program coding and testing. Think through possible pitfalls and challenges and get support from senior developers when needed. Assist in building and upgrading API infrastructure.

Required Skills: Programming skills in core Java and Spring Boot with over 3 years of experience. Experience in design, software development, and testing. Proficient knowledge of coding principles. Good knowledge of REST / web services. Problem-solving and troubleshooting skills. Good communication skills. Proficiency in version control software such as Git. Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements. Familiarity with Agile/Scrum methodologies.

About Us: We are a software development studio with expertise in app and web development. We work with clients worldwide and have already shipped over a dozen products for several multi-million-dollar startups. The core team is IIT Delhi '10 alumni with a combined experience of over 40 years across corporates and startups in New York, Bengaluru, and Delhi. The founders have experience working with large global MNCs as well as building startups with successful exits, including a funded startup in the US. We're based in the small, beautiful, and developed city of Karnal in Haryana. More about us here: www.hcode.tech

Why Work with Us: Strong growth opportunities. A well-balanced work-life culture with sports options such as basketball, badminton, and table tennis on the premises, and a large, nature-feel office with an open working area. Corporate health insurance for self, spouse, and children. Startup pace in a strict 5-day Mon-Fri working format. A no-politics work environment built on merit and accountability.

Job Types: Regular/Permanent, Full-time
Benefits: Health insurance, paid sick time, paid time off, Provident Fund
Schedule: Day shift, Monday to Friday
Education: Bachelor's (Preferred)
Pay: ₹800,000.00 - ₹1,200,000.00 per year
Location Type: In-person
Ability to commute/relocate: Karnal, Haryana - reliably commute or plan to relocate before starting work (Required)
Application Question(s): Present and expected CTC? How many years of experience do you have as a coder?
Experience: Java: 3 years (Required)
Location: Karnal, Haryana (Required)
Work Location: In person
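For orientation, here is a minimal, illustrative sketch of the kind of Spring Boot REST endpoint this posting's stack (Java 8+, Spring Boot, REST/web services) implies. The class, endpoint, and field names are hypothetical, not taken from the posting.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

import java.util.HashMap;
import java.util.Map;

@SpringBootApplication
@RestController
public class OrderServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(OrderServiceApplication.class, args);
    }

    // GET /orders/{id} returns a small JSON payload; in a real service this
    // would call a Hibernate/JPA repository rather than build a map inline.
    @GetMapping("/orders/{id}")
    public Map<String, Object> getOrder(@PathVariable long id) {
        Map<String, Object> body = new HashMap<>();
        body.put("id", id);
        body.put("status", "CREATED");
        return body;
    }
}
```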

Posted 1 month ago

Apply

5.0 years

0 Lacs

Andhra Pradesh, India

On-site

The SRE is part of an application team matrixed to the Cloud Services Team to perform a specialized function that focuses on the automation of availability, performance, maintainability, and optimization of business applications on the platform. To be effective in the position, an SRE must have strong AWS, Terraform, and GitHub skills, as the platform is 100% automated: all changes applied to the environment must be automated with Terraform and checked into GitHub version control. A matrixed SRE will be given the Reliability Engineering role in the accounts they are responsible for; this role includes the rights to perform all the functions required to support the applications in the IaaS environment. An SRE is required to adhere to all enterprise processes and controls (e.g., change management, incident and problem management) and ensure alignment with Cloud standards and best practices.

Responsibilities: Write and implement infrastructure as code and platform automation. Implement infrastructure as code with Terraform. Collaborate with Cloud Services and application teams to deliver projects. Deploy infrastructure-as-code (IaC) releases to QA, staging, and production environments. Build the automation for any account customizations required by the application (custom roles, policies, security groups, etc.).

DevOps Engineer requirements: Minimum 5 years of working experience, especially as a DevOps Engineer/SRE. Should work in an IC role with very good verbal and written communication skills. OS knowledge: 3 years of hands-on working experience on Linux. SCM: 3 years of hands-on working experience with Git, preferably GitHub Enterprise. Cloud experience: thorough knowledge of AWS; certification is preferred. CI/CD tool: 4 years of hands-on working experience with Jenkins, or with another CI/CD tool. EKS CI/CD: working experience with Jenkins (or another CI/CD tool) for EKS; hands-on experience with Jenkins pipeline scripts is preferred. Containers: minimum 1 year of hands-on working experience with Docker/Kubernetes; CKA (Certified Kubernetes Administrator) certification preferred. Mulesoft Runtime Fabric: install and configure the Anypoint Runtime Fabric environment and deploy applications on Runtime Fabric. Cloud infrastructure provisioning tool: 2 years of hands-on working experience with Terraform / Terraform Enterprise / CloudFormation. Application provisioning tool: 2 years of hands-on working experience with Puppet/Ansible/Chef. Data components: good knowledge and a minimum of 1 year of working experience with ELK, Kafka, and Zookeeper; HDF knowledge is an added advantage. Tools: Consul and Vault knowledge is an added advantage. Scripting: 3 years of hands-on working experience with any scripting language (Shell/Python/Ruby, etc.). Very good troubleshooting skills and hands-on working experience with production deployments and incidents. Mulesoft knowledge is an added advantage. Java/Spring Boot knowledge is an added advantage.

Posted 1 month ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Through our dedicated associates, Conduent delivers mission-critical services and solutions on behalf of Fortune 100 companies and over 500 governments - creating exceptional outcomes for our clients and the millions of people who count on them. You have an opportunity to personally thrive, make a difference, and be part of a culture where individuality is noticed and valued every day.

The candidate will work on modernizing a high-volume, large-scale, multi-tiered transaction processing production system to the cloud. Working in an Agile software development lifecycle, the candidate will analyze requirements, provide solutions, and mentor junior developers. They will design and develop near-real-time applications using various Java-based cloud technologies, act as technical lead providing solutions for the modernization effort, analyze new requirements and provide estimates for new development, mentor team members, and be in charge of the technical delivery of the project/module.

Requirements: Very good verbal and written communication. 10+ years of experience developing applications in Java. Solid experience in modernizing monoliths to cloud and microservices. Good experience with Spring MVC, Spring Boot, Angular, and Node.js. Experience with technologies such as Kafka, JMS, Zookeeper, Spring MVC, Spring Boot, microservices, Azure, AWS, MongoDB, Elastic, Kibana, and high-volume transaction processing. Experience in a design and implementation role. Exposure to and implementation knowledge of IoT frameworks and MQTT. Experience designing and developing applications from requirements/use cases through to production. Experience with or exposure to cloud technologies and deploying applications in cloud environments and containers.

Additional Desired Skills: Creative problem-solving skills. Work collaboratively with other members of the project team to ensure timely delivery of high-quality enterprise applications. Plan and estimate development work needed to implement assigned tasks. Transform complex requirements into working, maintainable enterprise-level solutions. Perform detailed application design as appropriate. Author and maintain necessary design and technical documentation. Provide leadership to other team members to deliver high-quality systems on schedule. Knowledge of source code version control (SVN) and tracking tools (JIRA, Bugzilla, etc.). Participates in code reviews. Participates in software design meetings and analyzes user needs to determine technical requirements. Consults with end users to prototype, refine, test, and debug programs to meet needs. Conducts tasks and assignments as directed. Works under minimal supervision with some latitude for independent judgment.

Conduent is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, creed, religion, ancestry, national origin, age, gender identity, gender expression, sex/gender, marital status, sexual orientation, physical or mental disability, medical condition, use of a guide dog or service animal, military/veteran status, citizenship status, basis of genetic information, or any other group protected by law. People with disabilities who need a reasonable accommodation to apply for or compete for employment with Conduent may request such accommodation(s) by submitting their request through this form that must be downloaded: click here to access or download the form. Complete the form and then email it as an attachment to FTADAAA@conduent.com. You may also click here to access Conduent's ADAAA Accommodation Policy.

At Conduent we value the health and safety of our associates, their families, and our community. For US applicants, while we DO NOT require vaccination for most of our jobs, we DO require that you provide us with your vaccination status, where legally permissible. Providing this information is a requirement of your employment at Conduent.
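As an illustration of the near-real-time Java/Kafka work this posting describes, here is a minimal sketch of a Kafka listener using Spring for Apache Kafka. It assumes the spring-kafka dependency and broker settings in application properties; the topic and group names are hypothetical, not from the posting.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@SpringBootApplication
public class TxnStreamApplication {
    public static void main(String[] args) {
        SpringApplication.run(TxnStreamApplication.class, args);
    }
}

@Component
class TransactionListener {

    // Invoked for every record on the "transactions" topic; broker addresses
    // come from spring.kafka.bootstrap-servers in the application configuration.
    @KafkaListener(topics = "transactions", groupId = "txn-processor")
    public void onMessage(String payload) {
        // In a real service this would validate, enrich, and persist the event.
        System.out.println("received: " + payload);
    }
}
```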

Posted 1 month ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Through our dedicated associates, Conduent delivers mission-critical services and solutions on behalf of Fortune 100 companies and over 500 governments - creating exceptional outcomes for our clients and the millions of people who count on them. You have an opportunity to personally thrive, make a difference, and be part of a culture where individuality is noticed and valued every day.

The candidate will work on modernizing a high-volume, large-scale, multi-tiered transaction processing production system to the cloud. Working in an Agile software development lifecycle, the candidate will analyze requirements, provide solutions, and mentor junior developers. They will design and develop near-real-time applications using various Java-based cloud technologies, act as technical lead providing solutions for the modernization effort, analyze new requirements and provide estimates for new development, mentor team members, and be in charge of the technical delivery of the project/module.

Requirements: Very good verbal and written communication. 10+ years of experience developing applications in Java. Solid experience in modernizing monoliths to cloud and microservices. Good experience with Spring MVC, Spring Boot, Angular, and Node.js. Experience with technologies such as Kafka, JMS, Zookeeper, Spring MVC, Spring Boot, microservices, Azure, AWS, MongoDB, Elastic, Kibana, and high-volume transaction processing. Experience in a design and implementation role. Exposure to and implementation knowledge of IoT frameworks and MQTT. Experience designing and developing applications from requirements/use cases through to production. Experience with or exposure to cloud technologies and deploying applications in cloud environments and containers.

Additional Desired Skills: Creative problem-solving skills. Work collaboratively with other members of the project team to ensure timely delivery of high-quality enterprise applications. Plan and estimate development work needed to implement assigned tasks. Transform complex requirements into working, maintainable enterprise-level solutions. Perform detailed application design as appropriate. Author and maintain necessary design and technical documentation. Provide leadership to other team members to deliver high-quality systems on schedule. Knowledge of source code version control (SVN) and tracking tools (JIRA, Bugzilla, etc.). Participates in code reviews. Participates in software design meetings and analyzes user needs to determine technical requirements. Consults with end users to prototype, refine, test, and debug programs to meet needs. Conducts tasks and assignments as directed. Works under minimal supervision with some latitude for independent judgment.

Conduent is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, creed, religion, ancestry, national origin, age, gender identity, gender expression, sex/gender, marital status, sexual orientation, physical or mental disability, medical condition, use of a guide dog or service animal, military/veteran status, citizenship status, basis of genetic information, or any other group protected by law. People with disabilities who need a reasonable accommodation to apply for or compete for employment with Conduent may request such accommodation(s) by submitting their request through this form that must be downloaded: click here to access or download the form. Complete the form and then email it as an attachment to FTADAAA@conduent.com. You may also click here to access Conduent's ADAAA Accommodation Policy.

At Conduent we value the health and safety of our associates, their families, and our community. For US applicants, while we DO NOT require vaccination for most of our jobs, we DO require that you provide us with your vaccination status, where legally permissible. Providing this information is a requirement of your employment at Conduent.

Posted 1 month ago

Apply

10.0 years

0 Lacs

India

On-site

Job Summary / Key Responsibilities: Manage and maintain Kafka clusters in non-production and production environments. Set up and configure Kafka clusters with Zookeeper or KRaft. Implement and manage various Kafka connectors. Develop and maintain ksqlDB streams and tables. Handle Kafka operations on Kubernetes. Create and manage Docker images when they are not available in public registries. Write and manage YAML files for configuration. Implement CI/CD pipelines using GitLab. Monitor Kafka environments using tools like Datadog. Automate infrastructure management using Terraform. Manage AWS services including security groups, load balancers, Route 53, networking, S3, and ECR. Troubleshoot and resolve Kafka and infrastructure-related issues. Adhere to incident, change, and problem management processes.

Skills and Abilities: Strong knowledge of Kafka architecture. Hands-on experience with Kafka cluster setup using Zookeeper or KRaft. Proficiency in working with Kafka connectors and ksqlDB/Flink. Experience with Kubernetes for Kafka deployment and management. Ability to create Docker images and work with containerized environments. Proficiency in writing YAML configurations. Strong troubleshooting and debugging skills. Experience with monitoring tools like Datadog. Familiarity with GitLab CI/CD pipelines. Proficiency in Terraform for infrastructure automation. Strong understanding of AWS cloud services. Experience with IT service management processes (incident, change, and problem management).

Experience: 10 years of experience managing Kafka clusters in production environments. Experience setting up and maintaining Kafka on Kubernetes. Hands-on experience with AWS cloud infrastructure. Experience with CI/CD pipelines, infrastructure automation, and monitoring tools.

Certifications: Confluent, AWS.
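For context, here is a minimal sketch of one routine cluster-administration task this role describes: creating a topic programmatically with Kafka's AdminClient. The topic name, partition count, replication factor, and broker address are illustrative assumptions, not values from the posting.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;

public class CreateTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // 6 partitions, replication factor 3 -- values a production cluster
            // would normally choose per topic based on throughput and durability needs.
            NewTopic topic = new NewTopic("orders.v1", 6, (short) 3);
            admin.createTopics(Collections.singleton(topic)).all().get();
            System.out.println("topic created: " + topic.name());
        }
    }
}
```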

Posted 1 month ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Unicommerce: Unicommerce is a leading e-commerce enablement SaaS platform that powers end-to-end e-commerce operations for brands, marketplaces, and logistics providers. Its full-stack solutions streamline both pre-purchase and post-purchase processes, driving efficiency and growth. Convertway by Unicommerce is a marketing automation platform that enhances customer engagement; it helps brands increase sales by capturing visitor data, automating WhatsApp and SMS communications, running campaigns, and providing chatbot support. Uniware is an advanced order processing platform that optimizes operations after an order is placed; it enables seamless inventory management, multi-channel order processing, returns handling, and payment reconciliation, and offers seller, order, warehouse, and inventory management along with omnichannel solutions. Shipway by Unicommerce is a logistics platform that reduces shipping costs through courier aggregation and automation; its key solutions include smart courier allocation, order tracking, and return automation. For more information, visit https://unicommerce.com. Follow Unicommerce on LinkedIn, Instagram, and Twitter, and visit our careers page for current open roles across functions.

Job Description: A technology enthusiast who is comfortable being part of a small, highly visible, tight-knit team and can collaborate closely with team leads and architects to accomplish your goals. You own your part of the product line, your staffing decisions, prioritization, and the operational excellence of the platform.

Responsibilities: Help define the technical roadmap and own the entire product delivery end to end. Work closely with various business stakeholders to drive the execution of multiple business plans and technologies. Improve, optimize, and identify opportunities for efficient software development processes. Hire, develop, and retain a strong team of engineers. Keep abreast of changes in the industry and champion new technologies and development processes within the team.

Apply if you have: A graduate/postgraduate degree in Computer Science (IITs, IIITs, and NITs preferred). 8+ years of strong experience in Java (Spring/Hibernate/JPA/REST), with good exposure to MySQL. Experience with Tomcat, Jetty, Node, ActiveMQ, Kafka, Zookeeper, Hazelcast, MySQL, MongoDB, Bootstrap, ReactJS, AWS EC2, S3, ELB, Java, JS, and Python. Experience working with agile teams and making rapid decisions in a dynamic and disruptive environment. 3+ years of leading and managing a team consisting of backend, frontend, and QA. Hands-on writing and reviewing of code. Exceptional design and architectural skills. Strong communication skills.

Posted 1 month ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description: 6-8 years' experience in designing and implementing mobile-based operator round / inspection round / logbook solutions. Good understanding of the activities carried out by operators during field rounds and of work processes in the refinery/petrochemical/mining domain. Prior experience implementing operator/field round solutions such as Honeywell Forge Inspection Rounds (Movilizer), AVEVA Mobile Operator Rounds (IntelaTrac), j5 Operator Rounds & Routine Duties, GE APM Rounds Pro - Operator Rounds, etc. Good expertise in SQL, with the ability to write SQL views and stored procedures. Good understanding of historian solutions and operator logbook solutions. Familiar with the Agile development methodology (development lifecycle). Experience deploying operator logbook solutions such as Dynamo Operations Suite, j5 Operations Logbook, Exaquantum Electronic Logbook, etc. is an added advantage. Knowledge of the deployment, maintenance, and monitoring of the following is an added advantage: Linux server administration (RedHat, CentOS, Ubuntu), Shell, Perl, and networking (load balancers - NGINX and HAProxy); knowledge of clustering is required; Apache Tomcat, Cassandra, Kafka, Zookeeper; Docker Registry, Nexus; expertise in Docker and Kubernetes.

Responsibilities: Independently execute the technical delivery of the project, from design to closure. Develop design documents - FDS, DDS, test procedures, and training manuals. Translate functional requirements into technical requirements and work with a cross-functional team spanning infrastructure, integration, dashboards, and reports to ensure smooth execution. Implement operator rounds / inspection rounds / operations logbook solutions. Work under lead supervision to gather requirements and build the required solutions (design, configure, test, and deploy) per the customer RFQ. Manage customer expectations and ensure good quality and on-time delivery. Address customer issues promptly by escalating to the right internal stakeholders. Follow the company-defined standard practices and methods.

Qualifications: BE/B.Tech in Chemical Engineering, Instrumentation, or Computer Science Engineering. Experience: 6+ years, with a minimum of 3 years of experience deploying operator round / inspection round solutions such as Honeywell Forge Inspection Rounds (Movilizer), AVEVA Mobile Operator Rounds (IntelaTrac), j5 Operator Rounds & Routine Duties, GE APM Rounds Pro - Operator Rounds, etc.

About Us: Honeywell helps organizations solve the world's most complex challenges in automation, the future of aviation, and energy transition. As a trusted partner, we provide actionable solutions and innovation through our Aerospace Technologies, Building Automation, Energy and Sustainability Solutions, and Industrial Automation business segments - powered by our Honeywell Forge software - that help make the world smarter, safer, and more sustainable.

Posted 1 month ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Bengaluru

Work from Office

The Opportunity: "We are seeking a senior software engineer to undertake a range of feature development tasks that continue the evolution of our DMP Streaming product. You will demonstrate the required potential and technical curiosity to work on software that utilizes a range of leading-edge technologies and integration frameworks. Given your depth of experience, we also want you to technically guide more junior members of the team, instilling both good engineering practices and inspiring them to grow." - Software Quality Assurance Director

What You'll Contribute: Implement product changes, undertaking detailed design, programming, unit testing, and deployment as required by our SDLC process. Investigate and resolve reported software defects across supported platforms. Work in conjunction with product management to understand business requirements and convert them into effective software designs that will enhance the current product offering. Produce component specifications and prototypes as necessary. Provide realistic and achievable project estimates for the creation and development of solutions; this information will form part of a larger release delivery plan. Develop and test software components of varying size and complexity. Design and execute unit, link, and integration test plans, and document test results; create test data and environments as necessary to support the required level of validation. Work closely with the quality assurance team and assist with integration testing, system testing, acceptance testing, and implementation. Produce relevant system documentation. Participate in peer review sessions to ensure ongoing quality of deliverables; validate other team members' software changes, test plans, and results. Maintain and develop industry knowledge, skills, and competencies in software development.

What We're Seeking: A Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Java software development experience within an industry setting. Ability to work in both Windows and UNIX/Linux operating systems. Detailed understanding of software and testing methods. Strong foundation and grasp of design models and database structures. Proficiency in Kubernetes, Docker, and Kustomize. Exposure to the following technologies: Apache Storm, MySQL or Oracle, Kafka, Cassandra, OpenSearch, and API (REST) development. Familiarity with Eclipse, Subversion, and Maven. Ability to lead and manage others independently on major feature changes. Excellent communication skills, with the ability to articulate information clearly to architects and discuss strategy and requirements with team members and the product manager. A quality-driven work ethic with meticulous attention to detail. Ability to function effectively in a geographically diverse team. Ability to work within a hybrid Agile methodology. Understanding of the design and development approaches required to build a scalable infrastructure/platform for large amounts of data ingestion, aggregation, integration, and advanced analytics. Experience developing and deploying applications into AWS or a private cloud. Exposure to any of the following: Hadoop, JMS, Zookeeper, Spring, JavaScript, Angular, UI development.
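As a rough illustration of the streaming-ingestion work a DMP Streaming role touches, here is a minimal sketch of a plain Kafka consumer polling a topic. The topic, group, and broker values are illustrative assumptions; the downstream writes mentioned in the comments (Cassandra/OpenSearch) are only indicative of the stack named above.

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class EventIngestLoop {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "dmp-ingest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singleton("clickstream"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Real code would transform the record and write it downstream here.
                    System.out.printf("offset=%d key=%s%n", record.offset(), record.key());
                }
            }
        }
    }
}
```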

Posted 1 month ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About the Company: Droisys is an innovation technology company focused on helping companies accelerate their digital initiatives from strategy and planning through execution. We leverage deep technical expertise, Agile methodologies, and data-driven intelligence to modernize systems of engagement and simplify human/tech interaction. Amazing things happen when we work in environments where everyone feels a true sense of belonging and when candidates have the requisite skills and opportunities to succeed. At Droisys, we invest in our talent and support career growth, and we are always on the lookout for amazing talent who can contribute to our growth by delivering top results for our clients. Join us to challenge yourself and accomplish work that matters.

We're looking for a Java Developer to join our dynamic and innovative team, working on a cutting-edge product. You'll be part of a cross-functional, self-organizing team that values autonomy, agility, and continuous improvement. This is a highly collaborative environment where you'll work closely with development leads, product managers, and stakeholders to deliver high-quality solutions.

Work Location: Gurugram, Haryana and Noida, UP

Key Responsibilities: Collaborate within an agile team to design, develop, and maintain robust Java applications. Break down complex user stories and technical requirements into actionable tasks. Write clean, efficient, and testable code, along with unit and integration tests. Document technical designs and contribute to knowledge sharing across teams. Participate in code reviews and ongoing refactoring efforts to improve code quality. Provide Level 3 production support when needed. Work closely with both onshore and offshore team members.

Required Skills & Experience: 6+ years of hands-on experience in Java software development. Proficiency in Java 17 and a solid understanding of object-oriented programming. Experience with Spring Boot, REST APIs, microservices, and WebSockets. Strong knowledge of unit testing, code refactoring, and best practices in software development. Familiarity with tools like Maven, Git, and Jenkins. Experience working with Apache Kafka, Zookeeper, and PostgreSQL. Understanding of YAML configuration and Hibernate ORM. Excellent problem-solving and communication skills.

Droisys is an equal opportunity employer. We do not discriminate based on race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law. Droisys believes in diversity, inclusion, and belonging, and we are committed to fostering a diverse work environment.
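To illustrate the "clean, testable code plus unit tests" expectation above, here is a minimal JUnit 5 sketch. The PriceCalculator class is a hypothetical example invented for illustration, not something from the posting.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

class PriceCalculator {
    // Applies a percentage discount; rejects out-of-range inputs.
    double discounted(double price, double percent) {
        if (percent < 0 || percent > 100) {
            throw new IllegalArgumentException("percent must be between 0 and 100");
        }
        return price * (1 - percent / 100.0);
    }
}

class PriceCalculatorTest {

    private final PriceCalculator calculator = new PriceCalculator();

    @Test
    void appliesDiscount() {
        assertEquals(90.0, calculator.discounted(100.0, 10.0), 1e-9);
    }

    @Test
    void rejectsInvalidPercent() {
        assertThrows(IllegalArgumentException.class, () -> calculator.discounted(100.0, 150.0));
    }
}
```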

Posted 1 month ago

Apply

0 years

0 Lacs

Vadodara, Gujarat, India

On-site

Our Purpose: Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart, and accessible. Our technology and innovation, partnerships, and networks combine to deliver a unique set of products and services that help people, businesses, and governments realize their greatest potential.

Title and Summary: Lead Software Engineer

Overview: We develop payment solutions that can be used in the store, inside your application, or online via the browser. We are looking for Lead Engineers who can lead the development of microservices-based enterprise applications using the Java/J2EE stack, as well as the development of portals used by customer care, end users, customer representatives, etc. We are looking for you if your answer to the following questions is yes: Are you interested in next-generation payment solutions? Have you developed and led a complex enterprise application? Are you skilled in the latest J2EE frameworks?

Role: Technically lead the project through all stages of the project life cycle, including requirement understanding, estimation, design, development, and acceptance testing. Work closely with the Project Architect, BA, and/or customer to produce high-level and low-level designs. Develop critical components whenever required and create proofs of concept for new or unknown use cases and ideas. Conduct code reviews and take responsibility for overall code quality; coach and mentor less experienced team members. Write and review design documentation in conjunction with the technical writer. Perform SCM operations - branching, merging, tagging, conflict resolution. Study upcoming technologies and identify how they can be used to improve the existing solution. Comply with the organization's processes and policies and protect the organization's intellectual property; also participate in organization-level process improvement and knowledge sharing.

All About You - essential knowledge, skills, and attributes: Hands-on experience and expertise working with core Java, Spring Boot, Spring (MVC, IOC, AOP, Security), SQL, RDBMS (Oracle and Postgres), NoSQL (Cassandra), web services (JSON and SOAP), Kafka, and Zookeeper. Hands-on experience developing microservice applications and deploying them on a public cloud such as Google, AWS, or Azure. Hands-on experience using the Eclipse/MyEclipse IDE and UML tools (MS Visio, PlantUML). Hands-on experience writing JUnit test cases and working with Maven/Ant/Gradle and Git. Experience working with Agile methodologies. Personal attributes: strong logical and analytical skills, design skills, and the ability to articulate and present thoughts clearly and precisely in English (written and verbal). Knowledge of design patterns. Knowledge of security concepts (e.g., authentication, authorization, confidentiality) and protocols, and their usage in enterprise applications.

Additional/Desirable Capabilities: Experience working in the payments application domain. Hands-on experience working with tools like Mockito, JBehave, Jenkins, Bamboo, Confluence, and Rally.

Corporate Security Responsibility: All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization, and therefore every person working for, or on behalf of, Mastercard is responsible for information security and must: abide by Mastercard's security policies and practices; ensure the confidentiality and integrity of the information being accessed; report any suspected information security violation or breach; and complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.

R-238412
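Since Zookeeper appears in this stack, here is a minimal, hedged sketch of the official ZooKeeper Java client: connecting to an ensemble, creating a znode, and reading it back. The connect string, znode path, and payload are illustrative assumptions, not details from the posting.

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

import java.nio.charset.StandardCharsets;
import java.util.concurrent.CountDownLatch;

public class ZooKeeperConfigExample {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // The watcher fires with SyncConnected once the session is established.
        ZooKeeper zk = new ZooKeeper("zk-1:2181,zk-2:2181,zk-3:2181", 5000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();

        String path = "/feature-flags";
        if (zk.exists(path, false) == null) {
            zk.create(path, "3ds=enabled".getBytes(StandardCharsets.UTF_8),
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        }
        byte[] data = zk.getData(path, false, null);
        System.out.println("config: " + new String(data, StandardCharsets.UTF_8));
        zk.close();
    }
}
```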

Posted 1 month ago

Apply

5.0 years

15 Lacs

India

On-site

Key Responsibilities: Architect, design, and optimize enterprise-grade NiFi data flows for large-scale ingestion, transformation, and routing. Manage Kafka clusters at scale (multi-node, multi-datacenter setups), ensuring high availability, fault tolerance, and maximum throughput. Create custom NiFi processors and develop advanced flow templates and best practices. Handle advanced Kafka configurations - partitioning, replication, producer tuning, consumer optimization, rebalancing, etc. Implement stream processing using Kafka Streams and manage Kafka Connect integrations with external systems (databases, APIs, cloud storage). Design secure pipelines with end-to-end encryption, authentication (SSL/SASL), and RBAC for both NiFi and Kafka. Proactively monitor and troubleshoot performance bottlenecks in real-time streaming environments. Collaborate with infrastructure teams on scaling, backup, and disaster recovery planning for NiFi/Kafka. Mentor junior engineers and enforce best practices for data flow and streaming architectures.

Required Skills and Qualifications: 5+ years of hands-on production experience with Apache NiFi and Apache Kafka. Deep understanding of NiFi architecture (flow file repository, provenance, state management, backpressure handling). Mastery of Kafka internals - brokers, producers/consumers, Zookeeper (or KRaft mode), offsets, ISR, topic configurations. Strong experience with Kafka Connect, Kafka Streams, Schema Registry, and data serialization formats (Avro, Protobuf, JSON). Expertise in tuning NiFi and Kafka for ultra-low latency and high throughput. Strong scripting and automation skills (Shell, Python, Groovy, etc.). Experience with monitoring tools: Prometheus, Grafana, Confluent Control Center, NiFi Registry, NiFi monitoring dashboards. Solid knowledge of security best practices in data streaming (encryption, access control, secret management). Hands-on experience deploying on cloud platforms (AWS MSK, Azure Event Hubs, GCP Pub/Sub with Kafka connectors). Bachelor's or Master's degree in Computer Science, Data Engineering, or an equivalent field.

Preferred (Bonus) Skills: Experience with containerization and orchestration (Docker, Kubernetes, Helm). Knowledge of stream processing frameworks like Apache Flink or Spark Streaming. Contributions to open-source NiFi/Kafka projects (a huge plus!).

Soft Skills: Analytical thinker with exceptional troubleshooting skills. Ability to architect solutions under tight deadlines. Leadership qualities for guiding and mentoring engineering teams. Excellent communication and documentation skills.

Please send your resume to hr@rrmgt.in or call 9081819473.

Job Type: Full-time
Pay: From ₹1,500,000.00 per year
Work Location: In person
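As a concrete illustration of the producer-tuning and SSL/SASL themes in this posting, here is a minimal sketch of a Kafka producer configured for durability, batching, compression, and SASL_SSL authentication. The broker address, credentials, and topic name are illustrative assumptions; real credentials would come from a secret store.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class TunedSecureProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9093");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Durability and throughput tuning: wait for all in-sync replicas,
        // keep the producer idempotent, batch and compress records.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.LINGER_MS_CONFIG, "20");
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, Integer.toString(64 * 1024));
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");

        // SASL_SSL authentication; username/password here are placeholders.
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "SCRAM-SHA-512");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                        + "username=\"pipeline\" password=\"changeit\";");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("events.enriched", "key-1", "payload"));
            producer.flush();
        }
    }
}
```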

Posted 1 month ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

Remote

Are you ready to make your mark with a true industry disruptor? ZineOne, a subsidiary of Session AI, the pioneer of in-session marketing, is looking to add talented team members to help us grow into the premier revenue tool for e-commerce. We work with some of the leading brands nationwide and we innovate how brands connect with and convert customers.

Job Description: This position offers a hands-on, technical opportunity as a vital member of the Site Reliability Engineering group. Our SRE team is dedicated to ensuring that our cloud platform operates seamlessly, efficiently, and reliably at scale. The ideal candidate will bring over five years of experience managing cloud-based big data solutions, with a strong commitment to resolving operational challenges through automation and sophisticated software tools. Candidates must uphold a high standard of excellence and possess robust communication skills, both written and verbal. A strong customer focus and deep technical expertise in areas such as Linux, automation, application performance, databases, load balancers, networks, and storage systems are essential.

Key Responsibilities: As a Session AI SRE, you will: Design and implement solutions that enhance the availability, performance, and stability of our systems, services, and products. Develop, automate, and maintain infrastructure as code for provisioning environments in AWS, Azure, and GCP. Deploy modern automated solutions that enable automatic scaling of the core platform and features in the cloud. Apply cybersecurity best practices to safeguard our production infrastructure. Collaborate on DevOps automation, continuous integration, test automation, and continuous delivery for the Session AI platform and its new features. Manage data engineering tasks to ensure accurate and efficient data integration into our platform and outbound systems. Utilize expertise in DevOps best practices, shell scripting, Python, Java, and other programming languages, while continually exploring new technologies for automation solutions. Design and implement monitoring tools for service health, including fault detection, alerting, and recovery systems. Oversee business continuity and disaster recovery operations. Create and maintain operational documentation, focusing on reducing operational costs and enhancing procedures. Demonstrate a continuous-learning attitude with a commitment to exploring emerging technologies.

Preferred Skills: Experience with cloud platforms like AWS, Azure, and GCP, including their management consoles and CLIs. Proficiency in building and maintaining infrastructure on AWS (using services such as EC2, S3, ELB, VPC, CloudFront, Glue, Athena, etc.), Azure (Azure VMs, Blob Storage, Azure Functions, Virtual Networks, Azure Active Directory, Azure SQL Database, etc.), and GCP (Compute Engine, Cloud Storage, Cloud Functions, VPC, Cloud IAM, BigQuery, etc.). Expertise in Linux system administration and performance tuning. Strong programming skills in Python, Bash, and NodeJS. In-depth knowledge of container technologies like Docker and Kubernetes. Experience with real-time, big data platforms, including architectures like HDFS/HBase, Zookeeper, and Kafka. Familiarity with central logging systems such as ELK (Elasticsearch, Logstash, Kibana). Competence in implementing monitoring solutions using tools like Grafana, Telegraf, and Influx.

Benefits: Comparable salary package and stock options. Opportunity for continuous learning. Fully sponsored EAP services. Excellent work culture. Opportunity to be an integral part of our growth story and grow with our company. Health insurance for employees and dependents. Flexible work hours. Remote-friendly company.

Posted 1 month ago

Apply

7.0 years

0 Lacs

Noida

On-site

Our Company: Changing the world through digital experiences is what Adobe's all about. We give everyone - from emerging artists to global brands - everything they need to design and deliver exceptional digital experiences! We're passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We're on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

Do you have mobile applications installed on your devices? If so, chances are you've encountered our products. Ready to redefine the future of mobile experiences? The Adobe Experience Cloud Mobile team is integral to Adobe Journey Optimizer and Adobe Experience Platform, tailoring personalized, multi-channel customer journeys and campaigns with unified real-time customer data. Empowering businesses to deliver seamless, personalized experiences across channels is our focus. We're looking for a Software Engineer who is hardworking, eager to learn new technologies, and ready to contribute to building scalable, performant services for large enterprises. Your role involves designing, developing, testing, and maintaining high-performance systems in multi-cloud/region environments. Join us in shaping the digital experiences of tomorrow and making a significant impact in an ambitious and rewarding environment.

What you'll do: Participate in all aspects of service development activities, including design, prioritisation, coding, code review, testing, bug fixing, and deployment. Implement and maintain robust monitoring, alerting, and incident response to ensure the highest level of uptime and quality of service to customers through operational excellence. Participate in incident response efforts during significant-impact events, and contribute to after-action investigations, reviews, and indicated improvement actions. Identify and address performance bottlenecks; look for ways to continually improve the product and process. Build and maintain detailed documentation for software architecture, design, and implementation. Develop and evolve our test automation infrastructure to increase scale and velocity. Ensure quality around services and the end-to-end experience of our products. Collaborate with multi-functional professionals (UI/SDK developers, product managers, design, etc.) to deliver solutions. Participate in story mapping, daily stand-ups, retrospectives, and sprint planning/demos on a two-week cadence. Work independently on delivering sophisticated functionality. Rapidly prototype ideas and concepts and research recent trends and technologies. Communicate clearly with the team and management to define and achieve goals. Mentor and grow junior team members.

What you will need to succeed: A B.S. in Computer Science or an equivalent engineering degree. 7+ years of experience crafting and developing web or software applications. Strong communication and teamwork skills - building positive relationships with internal and external customers. Dedication to teamwork, self-organization, and continuous improvement. Proven experience in backend development, with expertise in languages such as Java, Node.js, or Python. Experience running cloud infrastructure, including hands-on experience with AWS or Azure, Kubernetes, GitOps, Terraform, Docker, and CI/CD. Experience setting up SLAs/SLOs/SLIs for key services and establishing the monitoring around them. Experience writing functional, integration, and performance tests and test frameworks. Experience with both SQL and NoSQL. Experience with Kafka and Zookeeper is a plus. Experience with mobile application development is a plus.

Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Position Overview: The ideal candidate will have a solid foundation in Java programming, with additional exposure to Scala and Python being a plus. This role requires an understanding of data modeling concepts (such as with UML) and experience with Terraform-based infrastructure deployment (such as in AWS). Familiarity with messaging systems such as Kafka and IBM MQ, stream processing technologies such as Flink, Flume, Spark, or Ray, and knowledge of the Spring and Spring Boot frameworks are important. Experience with caching systems, in-memory databases like RocksDB or ElastiCache, and distributed caching is beneficial. The role also involves working with distributed system design, as well as understanding synchronous and asynchronous messaging principles and design.

Key Responsibilities: Develop and maintain Java applications, with a preference for Java versions 11, 17, and 21. Utilize Scala and Python for specific project requirements as needed. Design and implement data models using UML concepts. Deploy and manage infrastructure using Terraform in AWS environments. Work with messaging systems, including Kafka and IBM MQ, to ensure efficient data communication. Implement solutions using the Spring and Spring Boot frameworks. Manage caching systems and in-memory databases, ensuring optimal performance. Contribute to the design and development of distributed systems, leveraging technologies like Zookeeper and Kafka. Apply synchronous and asynchronous messaging principles in system design. Utilize serialization formats such as Protobuf, Avro, and FlatBuffers as applicable. Work with data formats like Parquet and Iceberg, and understand data warehouse and lakehouse concepts.

Candidate Profile: Strong fundamentals in Java programming, with exposure to Scala and Python as a bonus. Fast learner with the ability to adapt to new technologies and methodologies. Creative thinker, open to innovative solutions beyond conventional approaches. Proactive and independent, capable of taking initiative and driving projects forward. Strong communication skills, able to collaborate effectively with cross-functional teams.

Preferred Qualifications: Experience with Java versions 11, 17, and 21. Familiarity with the Protobuf, Avro, and FlatBuffers serialization formats. Understanding of the Parquet and Iceberg table formats. Knowledge of data warehouse and lakehouse concepts.
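To make the synchronous vs. asynchronous messaging distinction above concrete, here is a minimal sketch using a Kafka producer: a blocking send that waits for the broker acknowledgement, and a non-blocking send handled via a callback. The topic and broker values are illustrative assumptions.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class SyncVsAsyncSend {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Synchronous: block until the broker acknowledges the write.
            RecordMetadata meta =
                    producer.send(new ProducerRecord<>("audit-events", "k1", "sync payload")).get();
            System.out.println("sync write at offset " + meta.offset());

            // Asynchronous: return immediately, handle the result in a callback.
            producer.send(new ProducerRecord<>("audit-events", "k2", "async payload"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            exception.printStackTrace();
                        } else {
                            System.out.println("async write at offset " + metadata.offset());
                        }
                    });
            producer.flush();
        }
    }
}
```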

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

The Opportunity: "We are seeking a senior software engineer to undertake a range of feature development tasks that continue the evolution of our DMP Streaming product. You will demonstrate the required potential and technical curiosity to work on software that utilizes a range of leading-edge technologies and integration frameworks. Given your depth of experience, we also want you to technically guide more junior members of the team, instilling both good engineering practices and inspiring them to grow."

What You'll Contribute: Implement product changes, undertaking detailed design, programming, unit testing, and deployment as required by our SDLC process. Investigate and resolve reported software defects across supported platforms. Work in conjunction with product management to understand business requirements and convert them into effective software designs that will enhance the current product offering. Produce component specifications and prototypes as necessary. Provide realistic and achievable project estimates for the creation and development of solutions; this information will form part of a larger release delivery plan. Develop and test software components of varying size and complexity. Design and execute unit, link, and integration test plans, and document test results; create test data and environments as necessary to support the required level of validation. Work closely with the quality assurance team and assist with integration testing, system testing, acceptance testing, and implementation. Produce relevant system documentation. Participate in peer review sessions to ensure ongoing quality of deliverables; validate other team members' software changes, test plans, and results. Maintain and develop industry knowledge, skills, and competencies in software development.

What We're Seeking: A Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 10+ years of Java software development experience within an industry setting. Ability to work in both Windows and UNIX/Linux operating systems. Detailed understanding of software and testing methods. Strong foundation and grasp of design models and database structures. Proficiency in Kubernetes, Docker, and Kustomize. Exposure to the following technologies: Apache Storm, MySQL or Oracle, Kafka, Cassandra, OpenSearch, and API (REST) development. Familiarity with Eclipse, Subversion, and Maven. Ability to lead and manage others independently on major feature changes. Excellent communication skills, with the ability to articulate information clearly to architects and discuss strategy and requirements with team members and the product manager. A quality-driven work ethic with meticulous attention to detail. Ability to function effectively in a geographically diverse team. Ability to work within a hybrid Agile methodology. Understanding of the design and development approaches required to build a scalable infrastructure/platform for large amounts of data ingestion, aggregation, integration, and advanced analytics. Experience developing and deploying applications into AWS or a private cloud. Exposure to any of the following: Hadoop, JMS, Zookeeper, Spring, JavaScript, Angular, UI development.

Our Offer to You: An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers, and Earn the Respect of Others. The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences. Highly competitive compensation, benefits, and rewards programs that encourage you to bring your best every day and be recognized for doing so. An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.

Posted 1 month ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Our Company: Changing the world through digital experiences is what Adobe's all about. We give everyone - from emerging artists to global brands - everything they need to design and deliver exceptional digital experiences! We're passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We're on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

Do you have mobile applications installed on your devices? If so, chances are you've encountered our products. Ready to redefine the future of mobile experiences? The Adobe Experience Cloud Mobile team is integral to Adobe Journey Optimizer and Adobe Experience Platform, tailoring personalized, multi-channel customer journeys and campaigns with unified real-time customer data. Empowering businesses to deliver seamless, personalized experiences across channels is our focus. We're looking for a Software Engineer who is hardworking, eager to learn new technologies, and ready to contribute to building scalable, performant services for large enterprises. Your role involves designing, developing, testing, and maintaining high-performance systems in multi-cloud/region environments. Join us in shaping the digital experiences of tomorrow and making a significant impact in an ambitious and rewarding environment.

What You'll Do: Participate in all aspects of service development activities, including design, prioritisation, coding, code review, testing, bug fixing, and deployment. Implement and maintain robust monitoring, alerting, and incident response to ensure the highest level of uptime and quality of service to customers through operational excellence. Participate in incident response efforts during significant-impact events, and contribute to after-action investigations, reviews, and indicated improvement actions. Identify and address performance bottlenecks; look for ways to continually improve the product and process. Build and maintain detailed documentation for software architecture, design, and implementation. Develop and evolve our test automation infrastructure to increase scale and velocity. Ensure quality around services and the end-to-end experience of our products. Collaborate with multi-functional professionals (UI/SDK developers, product managers, design, etc.) to deliver solutions. Participate in story mapping, daily stand-ups, retrospectives, and sprint planning/demos on a two-week cadence. Work independently on delivering sophisticated functionality. Rapidly prototype ideas and concepts and research recent trends and technologies. Communicate clearly with the team and management to define and achieve goals. Mentor and grow junior team members.

What you will need to succeed: A B.S. in Computer Science or an equivalent engineering degree. 7+ years of experience crafting and developing web or software applications. Strong communication and teamwork skills - building positive relationships with internal and external customers. Dedication to teamwork, self-organization, and continuous improvement. Proven experience in backend development, with expertise in languages such as Java, Node.js, or Python. Experience running cloud infrastructure, including hands-on experience with AWS or Azure, Kubernetes, GitOps, Terraform, Docker, and CI/CD. Experience setting up SLAs/SLOs/SLIs for key services and establishing the monitoring around them. Experience writing functional, integration, and performance tests and test frameworks. Experience with both SQL and NoSQL. Experience with Kafka and Zookeeper is a plus. Experience with mobile application development is a plus.

Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more about our vision here. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.

Posted 1 month ago

Apply

0 years

9 Lacs

Bengaluru

On-site

Associate - Production Support Engineer
Job ID: R0388741
Full/Part-Time: Full-time
Regular/Temporary: Regular
Listed: 2025-06-12
Location: Bangalore

Position Overview
Job Title: Associate - Production Support Engineer
Location: Bangalore, India

Role Description: You will be operating within Corporate Bank Production as an Associate, Production Support Engineer in the Corporate Banking subdivisions. You will be accountable for driving a culture of proactive continual improvement in the production environment through application and user-request support, troubleshooting and resolving errors in the production environment, automation of manual work, monitoring improvements, and platform hygiene; supporting the resolution of issues and conflicts; and preparing reports and meetings. The candidate should have experience with all relevant tools used in the Service Operations environment and specialist expertise in one or more technical domains, and should ensure that all associated Service Operations stakeholders are provided with an optimum level of service in line with Service Level Agreements (SLAs) / Operating Level Agreements (OLAs). Ensure all BAU support queries from the business are handled on priority and within the agreed SLA, and that all application stability issues are well taken care of. Support the resolution of incidents and problems within the team. Assist with the resolution of complex incidents, and ensure that the right problem-solving techniques and processes are applied. Embrace a Continuous Service Improvement approach to resolve IT failings, drive efficiencies, and remove repetition to streamline support activities, reduce risk, and improve system availability. Be responsible for your own engineering delivery and, using data and analytics, drive a reduction in technical debt across the production environment with development and infrastructure teams. Act as a Production Engineering role model to enhance the technical capability of the Production Support teams and create a future operating model embedded with an engineering culture.

Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance, and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion, and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.

What we'll offer you: As part of our flexible scheme, here are just some of the benefits that you'll enjoy: a best-in-class leave policy; gender-neutral parental leave; 100% reimbursement under the childcare assistance benefit (gender neutral); sponsorship for industry-relevant certifications and education; an Employee Assistance Program for you and your family members; comprehensive hospitalization insurance for you and your dependents; accident and term life insurance; complimentary health screening for those aged 35 and above.

Your key responsibilities: Lead by example to drive a culture of proactive continual improvement in the production environment through automation of manual work, monitoring improvements, and platform hygiene. Carry out technical analysis of the production platform to identify and remediate performance and resiliency issues. Engage in the Software Development Lifecycle (SDLC) to enhance production standards and controls. Update the run book and KEDB as and when required. Participate in all BCP and component failure tests based on the run books. Understand the flow of data through the application infrastructure; understanding the data flow is critical to providing the best operational support. Perform event monitoring and management via a 24x7 workbench that both monitors and regularly probes the service environment and acts on the instructions of the run book. Drive knowledge management across the supported applications and ensure full compliance. Work with team members to identify areas of focus where training may improve team performance and incident resolution.

Your skills and experience: Recent experience of applying technical solutions to improve the stability of production environments. Working experience with some of the following technology skills:
Technologies/Frameworks: Unix, shell scripting and/or Python; SQL stack; Oracle 12c/19c - PL/SQL, with familiarity with OEM tooling to review AWR reports and parameters; ITIL v3 certified (must); Control-M, CRON scheduling; MQ - DBUS, IBM; Java 8 / OpenJDK 11 (at least) - for debugging; familiarity with the Spring Boot framework; data streaming - Kafka (experience with the Confluent flavor is a plus) and ZooKeeper; Hadoop framework.
Configuration management tooling: Ansible.
Operating system/platform: RHEL 7.x (preferred), RHEL 6.x; OpenShift (as we move towards cloud computing, and Fabric is dependent on OpenShift).
CI/CD: Jenkins (preferred).
APM tooling: one or more of Splunk, AppDynamics, Geneos, New Relic.
Other platforms: scheduling - Ctrl-M is a plus, Autosys, etc.; search - Elasticsearch and/or Solr is a plus.
Methodology: microservices architecture; SDLC; Agile; fundamental network topology - TCP, LAN, VPN, GSLB, GTM, etc.; familiarity with TDD and/or BDD; distributed systems; experience on cloud platforms such as Azure or GCP is a plus; familiarity with containerization/Kubernetes.
Tools: ServiceNow, Jira, Confluence, BitBucket and/or Git, IntelliJ, SQL Plus, simple Unix tooling (putty, mPutty, Exceed), (PL/)SQL Developer.
Good understanding of the ITIL Service Management framework, including the Incident, Problem, and Change processes. Ability to self-manage a book of work and ensure clear transparency on progress, with clear and timely communication of issues. Excellent communication skills, both written and verbal, with attention to detail. Ability to work in a Follow-the-Sun model, in virtual teams, and in a matrix structure. Service Operations experience within a global operations context. 6-9 years of experience in IT in large corporate environments, specifically in the area of controlled production environments or in Financial Services Technology in a client-facing function. Global Transaction Banking experience is a plus. Experience of end-to-end Level 2/3/4 management and a good overview of Production/Operations Management overall. Experience of run-book execution. Experience of supporting complex application and infrastructure domains. Good analytical, troubleshooting, and problem-solving skills. Working knowledge of incident tracking tools (e.g., Remedy, Heat, etc.).
How we'll support you
Training and development to help you excel in your career. Coaching and support from experts in your team. A culture of continuous learning to aid progression. A range of flexible benefits that you can tailor to suit your needs.

About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
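Because this role pairs Unix/Python scripting with 24x7 probing of Kafka and ZooKeeper services, a small health-check script is a typical artifact of the job. Below is a minimal sketch using ZooKeeper's built-in four-letter commands (ruok/mntr); the host names, port, and thresholds are placeholders, not details from the posting, and the commands must be whitelisted on the servers.

```python
#!/usr/bin/env python3
"""Minimal ZooKeeper health probe using the four-letter commands ruok/mntr.

A sketch only: host list and port are assumptions, and 4lw.commands.whitelist
must allow these commands on the ZooKeeper servers.
"""
import socket

ZK_NODES = ["zk1.example.internal", "zk2.example.internal", "zk3.example.internal"]  # hypothetical hosts
ZK_PORT = 2181

def four_letter(host: str, port: int, cmd: str, timeout: float = 3.0) -> str:
    """Send a four-letter command and return the raw reply."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(cmd.encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

if __name__ == "__main__":
    for node in ZK_NODES:
        try:
            status = four_letter(node, ZK_PORT, "ruok")   # a healthy node replies "imok"
            stats = four_letter(node, ZK_PORT, "mntr")    # tab-separated key/value metrics
            mode = next((line.split("\t")[1] for line in stats.splitlines()
                         if line.startswith("zk_server_state")), "unknown")
            print(f"{node}: {status} ({mode})")
        except OSError as exc:
            print(f"{node}: UNREACHABLE ({exc})")
```

In practice a workbench tool like this would feed an alerting channel rather than print to stdout.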

Posted 1 month ago

Apply

5.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Discover your next opportunity within an organization that ranks among the world's 500 largest companies. Explore innovative opportunities, experience our rewarding culture, and work with talented teams that push you to grow every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, motivation, autonomy or leadership to lead teams, there are roles suited to your aspirations and skills, today and tomorrow.

Job Description
5-7 years of Confluent Kafka platform experience. Administration of the Confluent Kafka Platform on-prem and in the cloud. Knowledge of Confluent Kafka operations. Administration of topics, partitions, consumer groups and kSQL queries to maintain optimal performance. Knowledge of the Kafka ecosystem including Kafka brokers, Zookeeper/KRaft, kSQL, Connectors, Schema Registry, Control Center, and platform interoperability. Knowledge of Kafka Cluster Linking and replication. Experience with administering multi-regional Confluent clusters. System performance: knowledge of performance tuning of messaging systems and clients to meet application requirements. Operating systems: RedHat Linux.

Contract type: permanent (CDI). At UPS, equal opportunity, fair treatment and an inclusive work environment are key values to which we are committed.
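Day-to-day administration of topics, partitions and consumer groups usually starts with a cluster inventory. The following is a minimal sketch using the confluent-kafka Python client's AdminClient; the bootstrap address is a placeholder, and this is illustrative rather than any UPS-specific tooling.

```python
"""Quick inventory of a Kafka cluster: brokers, topics, and partition counts.

A minimal sketch, assuming the confluent-kafka package is installed and the
bootstrap address below (a placeholder) points at a reachable cluster.
"""
from confluent_kafka.admin import AdminClient

conf = {"bootstrap.servers": "broker1.example.internal:9092"}  # hypothetical address
admin = AdminClient(conf)

# Cluster metadata includes brokers, topics, and per-topic partition layout.
metadata = admin.list_topics(timeout=10)

print(f"Brokers: {len(metadata.brokers)}")
for name, topic in sorted(metadata.topics.items()):
    if topic.error is not None:
        print(f"{name}: ERROR {topic.error}")
        continue
    print(f"{name}: {len(topic.partitions)} partition(s)")
```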

Posted 1 month ago

Apply

8.0 - 13.0 years

25 - 40 Lacs

Pune

Work from Office

What You'll Do
Job Description: You will provide 24/7 administrative support (on-prem and Atlas Cloud) on MongoDB clusters, Postgres & Snowflake. Provide support for on-prem and Confluent Cloud Kafka clusters. You will review database designs to ensure all technical and business requirements are met. Perform database optimization and testing to ensure service level agreements are met. You will provide support during system implementation and in production. Provide support for Snowflake administrative tasks (data pipelines, object creation, access). Participate in weekday and weekend on-call rotations to support products running on Mongo, SQL, Kafka & Snowflake, and other RDBMS systems. This role does not have any managerial responsibilities; it is an individual contributor role. You will report to the Sr. Manager, Reliability Engineering.

What Your Responsibilities Will Be
8+ years of experience in managing MongoDB on-prem and on Atlas Cloud. Be a part of the database team in developing next-generation database systems. Provide services in administration and performance monitoring of database-related systems. Develop system administration standards and procedures to maintain practices. Support backup and recovery strategies. Contribute to the creative process of improving architectural designs and implementing new architectures. Expertise in delivering efficiency and cost effectiveness. Monitor and support capacity planning and analysis. Monitor performance, troubleshoot issues and proactively tune databases and workloads. Sound knowledge of Terraform and Grafana; manage infrastructure as code using Terraform & GitLab. Ability to work remotely.

What You'll Need to be Successful
Working knowledge of MongoDB (6.0 or above). Experience with sharding and replica sets. Working knowledge of database installation, setup, creation, and maintenance processes. Working knowledge of Change Streams and Mongo ETLs to replicate live changes to downstream analytics systems. Experience running MongoDB in containerized environments (EKS clusters). Support Reliability Engineering tasks for all other database platforms (SQL, MySQL, Postgres, Snowflake, Kafka). Experience with Cloud or Ops Manager (a plus). Understanding of networking components on AWS and GCP cloud. Technical knowledge of backup/recovery, disaster recovery and high availability techniques. Strong technical knowledge in writing shell scripts used to support database administration. Good understanding of Kafka and Snowflake administration. Good understanding of Debezium, Kafka, Zookeeper and Snowflake is a plus. Automate database routine tasks independently with shell, Python and other languages.
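The "Change Streams to downstream analytics" requirement maps to MongoDB's change stream API. Below is a minimal PyMongo sketch (not the employer's actual pipeline): the connection URI and namespace are placeholders, and change streams require a replica set or sharded cluster.

```python
"""Tail live changes from a MongoDB collection via a Change Stream.

A minimal sketch, assuming PyMongo is installed and the placeholder URI points
at a replica set; namespace and credentials are hypothetical.
"""
from pymongo import MongoClient

client = MongoClient("mongodb://dbadmin:secret@mongo.example.internal:27017/?replicaSet=rs0")  # hypothetical URI
collection = client["orders_db"]["orders"]  # hypothetical namespace

# Watch only inserts and updates, and include the full post-image of updated documents.
pipeline = [{"$match": {"operationType": {"$in": ["insert", "update"]}}}]

with collection.watch(pipeline, full_document="updateLookup") as stream:
    for change in stream:
        # A real replication job would forward this event downstream
        # (for example via Kafka) instead of printing it.
        print(change["operationType"], change["documentKey"])
```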

Posted 1 month ago

Apply

4.0 years

0 Lacs

Dholera, Gujarat, India

On-site

About The Business
Tata Electronics Private Limited (TEPL) is a greenfield venture of the Tata Group with expertise in manufacturing precision components. Tata Electronics (a wholly owned subsidiary of Tata Sons Pvt. Ltd.) is building India's first AI-enabled state-of-the-art Semiconductor Foundry. This facility will produce chips for applications such as power management ICs, display drivers, microcontrollers (MCUs) and high-performance computing logic, addressing the growing demand in markets such as automotive, computing and data storage, wireless communications and artificial intelligence. Tata Electronics is a subsidiary of the Tata group. The Tata Group operates in more than 100 countries across six continents, with the mission 'To improve the quality of life of the communities we serve globally, through long term stakeholder value creation based on leadership with Trust.'

Job Responsibilities
Architect and implement a scalable, offline Data Lake for structured, semi-structured, and unstructured data in an on-premises, air-gapped environment. Collaborate with Data Engineers, Factory IT, and Edge Device teams to enable seamless data ingestion and retrieval across the platform. Integrate with upstream systems like MES, SCADA, and process tools to capture high-frequency manufacturing data efficiently. Monitor and maintain system health, including compute resources, storage arrays, disk I/O, memory usage, and network throughput. Optimize Data Lake performance via partitioning, deduplication, compression (Parquet/ORC), and implementing effective indexing strategies. Select, integrate, and maintain tools like Apache Hadoop, Spark, Hive, HBase, and custom ETL pipelines suitable for offline deployment. Build custom ETL workflows for bulk and incremental data ingestion using Python, Spark, and shell scripting. Implement data governance policies covering access control, retention periods, and archival procedures with security and compliance in mind. Establish and test backup, failover, and disaster recovery protocols specifically designed for offline environments. Document architecture designs, optimization routines, job schedules, and standard operating procedures (SOPs) for platform maintenance. Conduct root cause analysis for hardware failures, system outages, or data integrity issues. Drive system scalability planning for multi-fab or multi-site future expansions.

Essential Attributes (Tech-Stacks)
Hands-on experience designing and maintaining offline or air-gapped Data Lake environments. Deep understanding of Hadoop ecosystem tools: HDFS, Hive, MapReduce, HBase, YARN, ZooKeeper and Spark. Expertise in custom ETL design, and large-scale batch and stream data ingestion. Strong scripting and automation capabilities using Bash and Python. Familiarity with data compression formats (ORC, Parquet) and ingestion frameworks (e.g., Flume). Working knowledge of message queues such as Kafka or RabbitMQ, with a focus on integration logic. Proven experience in system performance tuning, storage efficiency, and resource optimization.

Qualifications
BE/ME in Computer Science, Machine Learning, Electronics Engineering, Applied Mathematics, or Statistics.

Desired Experience Level
4 years relevant experience post Bachelors, or 2 years relevant experience post Masters. Experience with the semiconductor industry is a plus.
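The partitioning/compression responsibility described above is commonly implemented as a PySpark batch job that lands data as partitioned Parquet. The sketch below is illustrative only; the input path, partition column, and output location are assumptions, not details from the posting.

```python
"""Write manufacturing events into a partitioned, compressed Data Lake layout.

A minimal PySpark sketch, assuming Spark is available on the cluster; paths
and column names are hypothetical.
"""
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("mes-event-ingest")
    .getOrCreate()
)

# Hypothetical raw CSV drop exported from an upstream MES/SCADA system.
raw = spark.read.option("header", True).csv("hdfs:///landing/mes/events/")

cleaned = (
    raw.dropDuplicates(["event_id"])                     # basic deduplication
       .withColumn("event_date", F.to_date("event_ts"))  # derive the partition key
)

# Partition by date and compress with Snappy Parquet for efficient scans.
(
    cleaned.write
    .mode("append")
    .partitionBy("event_date")
    .option("compression", "snappy")
    .parquet("hdfs:///datalake/mes/events/")
)
```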

Posted 1 month ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Description
Summary of This Role
Collaborates with clients and other functional-area SMEs in the design of IT Roadmap items to illustrate architectural complexities and interactions of information systems. Analyzes, refines and documents the business requirements of the client. Analyzes existing systems to detect critical deficiencies and recommends solutions for improvement. Plans and designs information systems and implements updates within the scope of established guidelines and objectives. Researches new technological advances to assess current practices for compliance with systems requirements. Recommends solutions to address current system needs, process improvements and controls. Makes recommendations for future information system needs. Provides technical architecture and support across applications and guidance to other functional areas to define software/hardware requirements and in planning and delivering infrastructure. Analyzes infrastructure and capacity planning. Employs a thorough knowledge of required procedures, methodologies and/or application standards, including Payment Card Industry (PCI) and security-related compliance, to write or modify software programs, including analysis, writing specifications and code, program installation, and documentation for use with multiple application/user database systems. Maintains information systems by configuring software and hardware, tracking errors and data movement, and troubleshooting.

Collaborate with engineers across the Core and Team to create technical designs, develop, test and solve complex problems that drive the solution from initial concept to production. Contribute to our automated build, deploy and test processes for each solution. Work in an iterative manner that fits well with the development practices and pace within the team, with a focus on a fail-fast approach. Demo your work for colleagues and members of the business team. Conduct research on new and interesting technologies that help to progress our products and platforms. Create mechanisms/architectures that enable rapid recovery, repair and cleanup of existing solutions, with a good understanding of fault tolerance and failure domains. Identify opportunities to deliver self-service capability for the most common infrastructure and application management tasks. Create automated tests that easily plug into our automated code pipeline. Provide deep and detailed levels of monitoring across all levels of the application. Attend sessions and seminars and be an evangelist for the latest technology. Lead and help mentor other engineers and technical analysts. Plan sprints within your project team to keep yourself and the team moving forward.

What Are We Looking For in This Role?
Minimum Qualifications: MCA, B.Tech. or B.E. (four-year college degree) or equivalent. Typically a minimum of 8 years of professional experience in coding, designing, developing and analyzing data. Typically has advanced knowledge and use of one or more back-end languages / technologies and a moderate understanding of the corresponding complementary language / technology, including but not limited to: two or more modern programming languages used in the enterprise, experience working with various APIs and external services, and experience with both relational and NoSQL databases.

Preferred Qualifications: 8-10 years of experience; B.Tech / Master's degree (regular).

What Are Our Desired Skills and Capabilities?
Supervision: Determines methods and procedures on new assignments and may coordinate activities of other personnel (Team Lead). Experience of working on SOA architecture, microservices architecture, and event-driven and serverless architectures. Good knowledge of Java/JEE design patterns, Enterprise Integration design patterns, SOA design patterns, and Microservices design patterns. Experience of working on RESTful services, SOAP web services, gRPC, and async & streaming technologies. Experience of working on a Java 1.8+, Spring 4.x+, Spring Boot, Spring Data, Spring REST, Spring MVC, Spring Integration (i.e. no EJB :), Tomcat 8.5.x (embedded version), JUnit + Spring Test application stack. Experience of working on ORM / persistence frameworks or technologies like Hibernate, MyBatis, iBatis. Experience in designing and developing fault-tolerant, HA systems. Good hands-on experience with the AWS stack and services like S3, EC2, KMS, EKS, MSK, Lambda, IAM, RDS, DynamoDB, CloudWatch. Good hands-on experience with Cloud Native projects like Prometheus, Grafana, Argo, Harbor, Helm, Istio, K8S etc. Good experience of working with an Agile development model and automation/Test-Driven Development (TDD) methodologies. Good experience of using container technology to build out an automated platform architecture that allows for seamless deployment between on-premise and external cloud environments. Good experience of leveraging open technology such as Docker, Kubernetes, Terraform, Bash, JavaScript, Python, Git, Jenkins, Linux, HAProxy, AWS Cloud, ELK, Java, Kafka, MongoDB, ZooKeeper, and Amazon Web Services (EC2 Container Service, CloudFormation, Elastic Load Balancer, Auto Scaling Group). Good experience of integrating systems using a wide variety of protocols like REST, SOAP, MQ, TCP/IP, JSON and others. Good experience of designing and building automated code deployment systems that simplify development work and make our work more consistent and predictable. Exhibits a deep understanding of server virtualization, networking and storage, ensuring that the solution scales and performs with high availability and uptime.

Soft Skills: Is adaptable, result-oriented, portrays a positive attitude, and is flexible and multi-task oriented. Is able to accept guidance and is a good listener. Has good oral and written communication skills. Has the ability to understand business needs and translate them into technology solutions. Has strong research and problem-resolution skills. Is a strong team player, with good time management, interpersonal & presentation skills. Has strong customer focus and understands external and internal customer expectations. Is able to articulate technical solutions in language understood by business users. Has a go-getter attitude to handle challenging development tasks. Can drive change and has a good innovation track record.
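The fault-tolerance and REST-integration points above often reduce, in practice, to calling downstream services with bounded retries and backoff. The following is a minimal Python sketch of that pattern (not anything specific to this employer); the URL and tuning values are placeholders.

```python
"""Call a downstream REST service with bounded retries and exponential backoff.

A minimal sketch, assuming the requests and urllib3 packages; the endpoint
and retry settings are hypothetical.
"""
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def build_session() -> requests.Session:
    retry = Retry(
        total=3,                               # at most 3 retries
        backoff_factor=0.5,                    # 0.5s, 1s, 2s between attempts
        status_forcelist=(502, 503, 504),      # retry only transient server errors
        allowed_methods=frozenset({"GET"}),    # keep non-idempotent calls out of retries
    )
    session = requests.Session()
    session.mount("https://", HTTPAdapter(max_retries=retry))
    return session

if __name__ == "__main__":
    session = build_session()
    resp = session.get("https://payments.example.internal/api/v1/health", timeout=5)  # hypothetical URL
    resp.raise_for_status()
    print(resp.json())
```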

Posted 1 month ago

Apply

0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

What is your Role?
You will work in a multi-functional role with a combination of expertise in System and Hadoop administration. You will work in a team that often interacts with customers on various aspects related to technical support for the deployed system. You will be deputed at customer premises to assist customers with issues related to System and Hadoop administration. You will interact with the QA and Engineering teams to coordinate issue resolution within the SLA promised to the customer.

What will you do?
Deploying and administering Hortonworks, Cloudera, and the Apache Hadoop/Spark ecosystem. Installing the Linux operating system and networking. Writing Unix SHELL/Ansible scripts for automation. Maintaining core components such as Zookeeper, Kafka, NIFI, HDFS, YARN, REDIS, SPARK, HBASE etc. Taking care of the day-to-day running of Hadoop clusters using Ambari/Cloudera Manager/other monitoring tools, ensuring that the Hadoop cluster is up and running all the time. Maintaining HBASE clusters and capacity planning. Maintaining SOLR clusters and capacity planning. Working closely with the database team, network team and application teams to make sure that all the big data applications are highly available and performing as expected. Managing the KVM virtualization environment.
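Day-to-day Hadoop cluster care of the kind described above usually includes a scripted capacity check. Below is a minimal sketch that parses `hdfs dfsadmin -report`; the alert threshold is a placeholder, the `hdfs` CLI is assumed to be on PATH, and the exact label in the report can vary by Hadoop version.

```python
"""Daily HDFS capacity check for a Hadoop cluster.

A minimal sketch, assuming the 'hdfs' CLI is available to the invoking user;
the alert threshold is a hypothetical value.
"""
import re
import subprocess

THRESHOLD_PCT = 80.0  # hypothetical alerting threshold

report = subprocess.run(
    ["hdfs", "dfsadmin", "-report"],
    capture_output=True, text=True, check=True,
).stdout

# The summary section typically contains a line such as: "DFS Used%: 63.42%"
match = re.search(r"DFS Used%:\s*([\d.]+)%", report)
if match:
    used_pct = float(match.group(1))
    state = "WARN" if used_pct >= THRESHOLD_PCT else "OK"
    print(f"{state}: DFS used {used_pct:.2f}% (threshold {THRESHOLD_PCT}%)")
else:
    print("Could not parse DFS usage from dfsadmin output")
```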

Posted 1 month ago

Apply

5.0 - 8.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Key Responsibilities
Lead the deployment, configuration, and ongoing administration of Hortonworks, Cloudera, and Apache Hadoop/Spark ecosystems. Maintain and monitor core components of the Hadoop ecosystem including Zookeeper, Kafka, NIFI, HDFS, YARN, REDIS, SPARK, and HBASE. Take charge of the day-to-day running of Hadoop clusters using tools like Ambari, Cloudera Manager, or other monitoring tools, ensuring continuous availability and optimal performance. Manage and provide expertise in HBASE and SOLR clusters, including capacity planning and performance tuning. Perform installation, configuration, and troubleshooting of Linux operating systems and network components relevant to big data environments. Develop and implement automation scripts using Unix SHELL/Ansible scripting to streamline operational tasks and improve efficiency. Manage and maintain KVM virtualization environments. Oversee clusters, storage solutions, backup strategies, and disaster recovery plans for big data infrastructure. Implement and manage comprehensive monitoring tools to proactively identify and address system anomalies and performance bottlenecks. Work closely with database teams, network teams, and application teams to ensure high availability and expected performance of all big data applications. Interact directly with customers at their premises to provide technical support and resolve issues related to System and Hadoop administration. Coordinate closely with internal QA and Engineering teams to facilitate issue resolution within promised SLAs.

Skills & Qualifications
Experience: 5-8 years of strong individual contributor experience as a DevOps, System, and/or Hadoop administrator. Domain Expertise: Proficient in Linux administration; extensive experience with Hadoop infrastructure and administration; strong knowledge and experience with SOLR; proficiency in Configuration Management tools such as Ansible. Data Ecosystem Components: Must have hands-on experience and strong knowledge of managing and maintaining Hortonworks, Cloudera, and Apache Hadoop/Spark ecosystem deployments; core components like Zookeeper, Kafka, NIFI, HDFS, YARN, REDIS, SPARK, HBASE; and cluster management tools such as Ambari and Cloudera Manager. Scripting: Strong scripting skills in one or more languages such as Perl or Python. Infrastructure Management: Strong experience working with clusters, storage solutions, backup strategies, database management systems, monitoring tools, and disaster recovery. Virtualization: Experience managing KVM virtualization environments. Problem-Solving: Excellent analytical and problem-solving skills, with a methodical approach to debugging complex issues. Communication: Strong communication skills (verbal and written) with the ability to interact effectively with technical teams and customers. Education: Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related field, or equivalent relevant work experience. (ref:hirist.tech)

Posted 1 month ago

Apply

5.0 years

0 Lacs

Lucknow, Uttar Pradesh, India

On-site

About The Job
Job Description: We are seeking a highly skilled and customer-focused Technical Support Engineer to join our team. This role is responsible for delivering high-quality technical support to our customers, troubleshooting complex technical issues, and collaborating with cross-functional teams to ensure customer success. The Technical Support Engineer is expected to provide advanced technical support on a data-based software product to a global client base and partners. This role requires deep technical expertise, strong problem-solving skills, and the ability to communicate complex technical information effectively. The primary responsibility is to troubleshoot and resolve technical issues, support product adoption, and ensure customer satisfaction. The TSE must have experience working with databases, specifically graph databases, and possess a strong background in Linux, networking, and scripting (Bash/Python). They work collaboratively with engineering teams to escalate and resolve complex issues when necessary (e.g., when a code change is required or a behavior is being seen for the first time).

Roles And Responsibilities
Respond to customer inquiries and provide in-depth technical support via multiple communication channels. Collaborate with core engineering and solution engineering teams to diagnose and resolve complex technical problems. Create and maintain public documentation, internal knowledge base articles, and FAQs. Monitor and meet SLAs. Timely triage varying issues based on error messages, log files, thread dumps, stack traces, sample code, and other available data points. Efficiently troubleshoot cluster issues across multiple servers, data centers, and regions, in a variety of clouds (AWS, Azure, GCP, etc.), virtual, and bare-metal environments. The candidate will work during the EMEA time zone (2 PM to 10 PM shift).

Requirements
Must Have Skills: Education: B.Tech in Computer Engineering, Information Technology, or a related field. Experience: GraphDB experience is a must. 5+ years of experience in a Technical Support role on a data-based software product, at least at L3 level. Linux Expertise: 4+ years with an in-depth understanding of Linux, including filesystem, process management, memory management, networking, and security. Graph Databases: 3+ years of experience with Neo4j or similar graph database systems. SQL Expertise: 3+ years of experience in SQL for database querying, performance tuning, and debugging. Data Streaming & Processing: 2+ years of hands-on experience with Kafka, Zookeeper, and Spark. Scripting & Automation: 2+ years with strong skills in Bash scripting and Python for automation, task management, and issue resolution. Containerization & Orchestration: 1+ year of proficiency in Docker, Kubernetes, or other containerization technologies is essential. Monitoring & Performance Tools: Experience with Grafana, Datadog, Prometheus, or similar tools for system and performance monitoring. Networking & Load Balancing: Proficient in TCP/IP, load balancing strategies, and troubleshooting network-related issues. Web & API Technologies: Understanding of HTTP, SSL, and REST APIs for debugging and troubleshooting API-related issues.

Nice To Have Skills
Familiarity with Data Science or ML will be an edge. Experience with LDAP, SSO, OAuth authentication. Strong understanding of database internals and system architecture. Cloud certification (at least DevOps Engineer level). (ref:hirist.tech)
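Triage based on error messages and log files, as required above, often begins with a first-pass scan that counts errors by exception type. Below is a minimal, product-agnostic sketch; the log path and patterns are placeholders, not specifics from the posting.

```python
"""Triage a service log: count ERROR lines and Java exception classes.

A minimal sketch; the default log path is hypothetical and the patterns are
generic rather than product-specific.
"""
import re
import sys
from collections import Counter

LOG_PATH = sys.argv[1] if len(sys.argv) > 1 else "app.log"  # hypothetical log file
EXC_RE = re.compile(r"\b([A-Za-z_][\w.]*(?:Exception|Error))\b")

counts = Counter()
error_lines = 0

with open(LOG_PATH, encoding="utf-8", errors="replace") as fh:
    for line in fh:
        if " ERROR " in line or line.startswith("ERROR"):
            error_lines += 1
        for exc in EXC_RE.findall(line):
            counts[exc] += 1

print(f"ERROR lines: {error_lines}")
for exc, n in counts.most_common(10):
    print(f"{n:6d}  {exc}")
```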

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies