5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Cloud Engineering Specialist at BT, you will be part of a team that is shaping the future of communication services and defining how people interact with them. Your role will involve fulfilling various requirements on Voice platforms, ensuring timely delivery and integration with other platform components. Your responsibilities will include deploying infrastructure, networking, and software packages, as well as automating deployments. You will implement up-to-date security practices and manage issue diagnosis and resolution across infrastructure, software, and networking. Collaboration with development, design, ops, and test teams will be essential to ensure the reliable delivery of services.

To excel in this role, you should possess in-depth, hands-on knowledge of Linux, server management, and issue diagnosis. Proficiency in TCP/IP, HTTP, SIP, DNS, and Linux debugging tooling is required. You should also be comfortable with Bash/Python scripting, have a strong understanding of Git, and have experience in automation through tools like Ansible and Terraform. Your expertise should include a solid background in cloud technologies, preferably Azure, familiarity with container technologies such as Docker and Kubernetes, and GitOps tooling like FluxCD/ArgoCD. Exposure to CI/CD frameworks, observability tooling, RDBMS, NoSQL databases, service discovery, message queues, and Agile methodologies will be beneficial.

At BT, we value inclusivity, safety, integrity, and customer-centricity. Our leadership standards emphasize building trust, owning outcomes, delivering value to customers, and demonstrating a growth mindset. We are committed to building diverse, future-ready teams where individuals can thrive and contribute positively. BT, as part of BT Group, plays a vital role in connecting people, businesses, and public services. We embrace diversity and inclusion in everything we do, reflecting our core values of being Personal, Simple, and Brilliant. Join us in making a difference through digital transformation, and be part of a team that empowers lives and businesses through innovative communication solutions.
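For a flavour of the diagnostic work described above, here is a minimal Python sketch (the hostname and port are hypothetical) that checks DNS resolution and TCP reachability for a SIP endpoint using only the standard library:

```python
import socket

def check_endpoint(host: str, port: int, timeout: float = 3.0) -> None:
    """Resolve a hostname and test TCP reachability: a basic first step
    when diagnosing SIP/HTTP connectivity issues on a Voice platform."""
    try:
        addresses = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    except socket.gaierror as exc:
        print(f"DNS resolution failed for {host}: {exc}")
        return
    for family, _, _, _, sockaddr in addresses:
        with socket.socket(family, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            try:
                sock.connect(sockaddr)
                print(f"{host} reachable at {sockaddr}")
                return
            except OSError as exc:
                print(f"connect to {sockaddr} failed: {exc}")

if __name__ == "__main__":
    check_endpoint("sip.example.com", 5061)  # hypothetical SIP-over-TLS endpoint
```

In practice, a packet capture with tcpdump would be the natural next step when the TCP handshake succeeds but the SIP dialogue fails.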
Posted 3 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description

Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Senior Software Engineer.

In this role, you will:
Be a hands-on coder.
Collaborate with Enterprise/Solution Architects and Business Analysts to deliver high-quality APIs and enable their reuse across wider HSBC group systems.
Provide timely, professional consultancy and support for application teams’ queries and requests.
Ensure the technical design and code structure are technically coherent, future-proof, and compliant with technology standards and regulatory obligations.
Work with developers experienced in Angular, React JS, Java, COBOL, and microservices.

Requirements

To be successful in this role, you should meet the following requirements:
Solid and proficient skills in Java, Spring Framework, microservices, RAML, and the OAS 3 specification.
Strong foundation in RESTful design practices.
Experience working with an API management platform (e.g. Mule gateway, Zookeeper, Kong).
Experience modelling data in JSON.
Experience in Scrum and Agile ways of working.
Knowledge of DevOps tooling (e.g. Jenkins, Git, Maven).
Experience in unit testing, data mocking, and test automation.
Strong communication, analytical, design, and problem-solving skills.
Source code scanning and security (e.g. Checkmarx).
Experience in performance tuning.
Experience with JWT- and SAML-based token authentication.
Cloud experience is a plus (e.g. Docker, Kubernetes, PCF (Pivotal Cloud Foundry), GCP, AWS, Ali Cloud).
Spring Reactive is a plus.
NoSQL DB experience is a plus (e.g. MongoDB).
Knowledge or experience of ESB tools (e.g. IIB, Mule, MQ, Spring Boot).
Willingness to learn and explore various technologies, especially in the integration domain.
Excellent team player with the ability to work under minimal supervision.

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and their opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
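Because the requirements call out JWT-based token authentication, a minimal, hedged sketch with the PyJWT library may be useful; the secret, claims, and subject below are illustrative assumptions, not HSBC specifics:

```python
import datetime

import jwt  # PyJWT

SECRET = "replace-with-a-real-key"  # hypothetical shared secret

def issue_token(subject: str) -> str:
    # Sign a short-lived token carrying the caller's identity.
    now = datetime.datetime.now(datetime.timezone.utc)
    claims = {"sub": subject, "iat": now,
              "exp": now + datetime.timedelta(minutes=15)}
    return jwt.encode(claims, SECRET, algorithm="HS256")

def verify_token(token: str) -> dict:
    # Reject tampered or expired tokens before the API does any work.
    try:
        return jwt.decode(token, SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError as exc:
        raise PermissionError(f"rejected token: {exc}") from exc

if __name__ == "__main__":
    print(verify_token(issue_token("api-client-42")))
```

A production API gateway would more likely use asymmetric keys (e.g. RS256) so services can verify tokens without holding the signing secret.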
Posted 3 weeks ago
3.0 - 5.0 years
0 Lacs
Kochi, Kerala, India
On-site
Role Description

This is a full-time on-site role for a Kafka Developer, located in Kochi. The Kafka Developer will be responsible for designing, developing, and maintaining Kafka-based applications.

Required Skills
3-5 years of experience working with Apache Kafka in a production environment.
Strong knowledge of Kafka architecture, including brokers, topics, partitions, and replicas.
Experience with Kafka security, including SSL, SASL, and ACLs.
Proficiency in configuring, deploying, and managing Kafka clusters in cloud and on-premises environments.
Experience with Kafka stream processing using tools like Kafka Streams, KSQL, or Apache Flink.
Solid understanding of distributed systems, data streaming, and messaging patterns.
Proficiency in Java, Scala, or Python for Kafka-related development tasks.
Familiarity with DevOps practices, including CI/CD pipelines, monitoring, and logging.
Experience with tools like Zookeeper, Schema Registry, and Kafka Connect.
Strong problem-solving skills and the ability to troubleshoot complex issues in a distributed environment.
Excellent communication and collaboration skills to work effectively with cross-functional teams and stakeholders.

Qualifications
Bachelor's degree in Computer Science, Engineering, or a related field.
Proven experience as a Kafka Developer or in a similar role.
Strong understanding of distributed systems and real-time data processing.
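As a concrete illustration of the day-to-day work, here is a minimal producer/consumer pair using the confluent-kafka Python client; the broker address, topic, and group id are placeholder assumptions:

```python
from confluent_kafka import Consumer, Producer

BOOTSTRAP = "localhost:9092"  # placeholder broker
TOPIC = "orders"              # placeholder topic

def produce_one() -> None:
    producer = Producer({"bootstrap.servers": BOOTSTRAP})

    def on_delivery(err, msg):
        # The broker acknowledges (or rejects) each message asynchronously.
        if err is not None:
            print(f"delivery failed: {err}")
        else:
            print(f"delivered to partition {msg.partition()} @ offset {msg.offset()}")

    producer.produce(TOPIC, value=b"hello", on_delivery=on_delivery)
    producer.flush()  # block until outstanding messages are acknowledged

def consume_one() -> None:
    consumer = Consumer({
        "bootstrap.servers": BOOTSTRAP,
        "group.id": "demo-group",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe([TOPIC])
    msg = consumer.poll(timeout=10.0)
    if msg is not None and msg.error() is None:
        print(msg.value())
    consumer.close()

if __name__ == "__main__":
    produce_one()
    consume_one()
```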
Posted 3 weeks ago
5.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
AWS/Azure/GCP, Linux, shell scripting, IaC, Docker, Kubernetes, Mongo, MySQL, Solr, Jenkins, GitHub, automation, TCP/HTTP network protocols

A day in the life of an Infosys Equinox employee: As part of the Infosys Equinox delivery team, your primary role would be to ensure effective design, development, validation and support activities, to assure that our clients are satisfied with the high levels of service in the technology domain. You will gather the requirements and specifications to understand the client requirements in a detailed manner and translate them into system requirements. You will play a key role in the overall estimation of work requirements to provide the right information on project estimations to Technology Leads and Project Managers.

Ensure high availability of the infrastructure, administration, and overall support.
Strong analytical and troubleshooting/problem-solving skills - root cause identification and proactive service improvement, staying up to date on technologies and best practices.
Team and task management with tools like JIRA, adhering to SLAs.
A clear understanding of HTTP / network protocol concepts, designs & operations - TCP dump, cookies, sessions, headers, client-server architecture.
More than 5 years of working experience on the AWS/Azure/GCP cloud platforms.
Core strength in Linux and Azure infrastructure provisioning, including VNet, Subnet, Gateway, VM, security groups, MySQL, Blob Storage, Azure Cache, AKS Cluster, etc.
Expertise in automating infrastructure as code using Terraform, Packer, Ansible, shell scripting and Azure DevOps.
Expertise in patch management and APM tools like AppDynamics and Instana for monitoring and alerting.
Knowledge of technologies including Apache Solr, MySQL, Mongo, Zookeeper, RabbitMQ, Pentaho, etc.
Knowledge of cloud platforms including AWS and GCP is an added advantage.
Ability to identify and automate recurring tasks for better productivity.
Ability to understand and implement industry-standard security solutions.
Experience in implementing auto scaling, DR, HA, and multi-region deployments with best practices is an added advantage.
Ability to work under pressure, managing expectations from various key stakeholders.
Knowledge of more than one technology.
Basics of architecture and design fundamentals.
Knowledge of testing tools.
Knowledge of agile methodologies.
Understanding of project life cycle activities on development and maintenance projects.
Understanding of one or more estimation methodologies; knowledge of quality processes.
Basics of the business domain to understand the business requirements.
Analytical abilities, strong technical skills, good communication skills.
Good understanding of the technology and domain.
Ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles and modelling methods.
Awareness of the latest technologies and trends.
Excellent problem solving, analytical and debugging skills.
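Given the listing's emphasis on HTTP fundamentals (cookies, sessions, headers), here is a small standard-library-only Python sketch that inspects a response the way a support engineer might during triage; the host is an illustrative placeholder:

```python
import http.client

def inspect_http(host: str, path: str = "/") -> None:
    """Fetch a page and print status, headers, and any Set-Cookie value --
    the raw material of session debugging."""
    conn = http.client.HTTPSConnection(host, timeout=5)
    conn.request("GET", path, headers={"User-Agent": "debug-probe/1.0"})
    resp = conn.getresponse()
    print(resp.status, resp.reason)
    for name, value in resp.getheaders():
        print(f"{name}: {value}")
    cookie = resp.getheader("Set-Cookie")
    if cookie:
        print("session cookie observed:", cookie)
    conn.close()

if __name__ == "__main__":
    inspect_http("example.com")  # placeholder host
```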
Posted 3 weeks ago
5.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Hadoop/Cloudera Admin
Experience: 5 to 8 Years Only
Location: Bangalore/Hyderabad/Chennai/Mumbai/Pune (Hybrid Mode)
Notice Period: Immediate to 30 days max
Full-time employment with LTIMindtree (Direct Payroll)
Primary Skills: Build Cloudera clusters, migrate data, performance tuning, pre/post platform support for application migration.
Secondary Skills: Cluster management & monitoring via Cloudera, Kerberos, Hive, Impala, Kafka, Storm, Spark, HDFS, HBase, Oozie, Sqoop, Flume, Zookeeper, Apache Knox, Apache Sentry.
Thanks & Regards,
Prabal Pandey
Prabal.Pandey@alphacom.in
Posted 3 weeks ago
0 years
0 Lacs
Mulshi, Maharashtra, India
On-site
Area(s) of responsibility: Windchill Build/Infra Manager
Experience: 6-10 Years

Skills
Install, configure and deploy Windchill components on instance(s).
Should have experience with advanced Windchill deployments such as cluster deployments and load balancers.
Should be well versed in the Windchill architecture.
Experience in Windchill system admin activities: rehost, replica setup, SOLR, Zookeeper, certificate management, upgrades.
Experience in infrastructure activities: build & deployment, upgrades, data loading, server maintenance, network and database administration.
Should have experience with cloud deployments.
Should have experience in CI/CD, build management and scripting.
Should have experience in both Windows and other operating systems such as Linux, CentOS, UNIX, etc.
Should have experience in PLM monitoring, such as checking system health and performing preventive maintenance activities.
Should have extensive experience with AWS and Azure DevOps.
Should have good communication skills and the agility to work with cross-functional teams across time zones.
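Certificate management and preventive maintenance come up explicitly above; here is a minimal Python sketch of the kind of expiry check that might run against a Windchill, SOLR, or load-balancer endpoint (the host is a placeholder):

```python
import datetime
import socket
import ssl

def days_until_expiry(host: str, port: int = 443) -> int:
    """Return the number of days until a server's TLS certificate expires."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # getpeercert() reports notAfter like 'Jun  1 12:00:00 2025 GMT'.
    not_after = datetime.datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (not_after - datetime.datetime.utcnow()).days

if __name__ == "__main__":
    print(days_until_expiry("example.com"))  # placeholder host
```

Wired into a scheduler, a check like this turns expired-certificate incidents into routine renewal tickets.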
Posted 3 weeks ago
0 years
6 - 7 Lacs
Pune
On-site
Technical expertise in the design of industrial solutions utilizing Microsoft Azure, VMware, AWS, or similar cloud services.
Competent in Linux server administration, including RedHat, CentOS, and Ubuntu.
Extensive hands-on experience with Windows Server environments.
Practical experience with technologies such as Cassandra, Zookeeper, MinIO, and OpenShift, and with DevOps methodologies.
Proficient in utilizing Docker Registry, Nexus, and Kubernetes for container orchestration and management.
Experience with managing Active Directory, configuring DNS, and working with Group Policy Objects (GPO) and Organizational Unit (OU) structures.
Basic knowledge of networking concepts, including firewalls, routers, and load balancers such as NGINX and HAProxy.
Basic understanding of cybersecurity principles and secure design techniques.
Solid understanding of storage solutions, including SAN, NAS, and various storage devices.
Proficient in scripting, particularly with PowerShell and Terraform, with the ability to execute essential commands.
Basic knowledge of backup, restoration, and disaster recovery solutions.
Demonstrates knowledge in determining appropriate infrastructure sizing.
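Since ZooKeeper appears in the stack above (and recurs throughout this feed), here is a minimal liveness probe using the kazoo Python client; the connection string is a placeholder:

```python
from kazoo.client import KazooClient

def zookeeper_health(hosts: str = "localhost:2181") -> None:
    """Connect, report client state, and list root znodes: a basic
    liveness probe for a ZooKeeper ensemble."""
    zk = KazooClient(hosts=hosts, timeout=5.0)
    zk.start()
    try:
        print("state:", zk.state)
        print("root znodes:", zk.get_children("/"))
    finally:
        zk.stop()
        zk.close()

if __name__ == "__main__":
    zookeeper_health()  # placeholder connection string
```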
Posted 3 weeks ago
8.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Description – Kafka/Integration Architect

Position brief: The Kafka/Integration Architect is responsible for designing, implementing, and managing Kafka-based streaming data pipelines and messaging solutions. This role involves configuring, deploying, and monitoring Kafka clusters to ensure the high availability and scalability of data streaming services. The Kafka Architect collaborates with cross-functional teams to integrate Kafka into various applications and ensures optimal performance and reliability of the data infrastructure. The Kafka/Integration Architect plays a critical role in driving data-driven decision-making and enabling real-time analytics, contributing directly to the company’s agility, operational efficiency and ability to respond quickly to market changes. Their work supports key business initiatives by ensuring that data flows seamlessly across the organization, empowering teams with timely insights and enhancing the customer experience.

Location: Hyderabad

Primary Role & Responsibilities:
• Design, implement, and manage Kafka-based data pipelines and messaging solutions to support critical business operations and enable real-time data processing.
• Configure, deploy, and maintain Kafka clusters, ensuring high availability and scalability to maximize uptime and support business growth.
• Monitor Kafka performance and troubleshoot issues to minimize downtime and ensure uninterrupted data flow, enhancing decision-making and operational efficiency.
• Collaborate with development teams to integrate Kafka into applications and services.
• Develop and maintain Kafka connectors such as the JDBC, MongoDB, and S3 connectors, along with topics and schemas, to streamline data ingestion from databases, NoSQL data stores, and cloud storage, enabling faster data insights.
• Implement security measures to protect Kafka clusters and data streams, safeguarding sensitive information and maintaining regulatory compliance.
• Optimize Kafka configurations for performance, reliability, and scalability.
• Automate Kafka cluster operations using infrastructure-as-code tools like Terraform or Ansible to increase operational efficiency and reduce manual overhead.
• Provide technical support and guidance on Kafka best practices to development and operations teams, enhancing their ability to deliver reliable, high-performance applications.
• Maintain documentation of Kafka environments, configurations, and processes to ensure knowledge transfer, compliance, and smooth team collaboration.
• Stay updated on the latest Kafka features, updates, and industry best practices to continuously improve the data infrastructure and stay ahead of industry trends.

Required Soft Skills:
• Strong analytical and problem-solving skills.
• Excellent communication and collaboration skills.
• Ability to translate business requirements into technical solutions.

Working Experience and Qualification:
Education: Bachelor’s or master’s degree in Computer Science, Information Technology or a related field.
Experience: Proven experience of 8-10 years as a Kafka Architect or in a similar role.
Skills:
• Strong knowledge of Kafka architecture, including brokers, topics, partitions and replicas.
• Experience with Kafka security, including SSL, SASL, and ACLs.
• Proficiency in configuring, deploying, and managing Kafka clusters in cloud and on-premises environments.
• Experience with Kafka stream processing using tools like Kafka Streams, KSQL, or Apache Flink.
• Solid understanding of distributed systems, data streaming and messaging patterns.
• Proficiency in Java, Scala, or Python for Kafka-related development tasks.
• Familiarity with DevOps practices, including CI/CD pipelines, monitoring, and logging.
• Experience with tools like Zookeeper, Schema Registry, and Kafka Connect.
• Strong problem-solving skills and the ability to troubleshoot complex issues in a distributed environment.
• Experience with cloud platforms like AWS, Azure, or GCP.

Preferred Skills (Optional):
Kafka certification or related credentials, such as:
Confluent Certified Administrator for Apache Kafka (CCAAK)
Cloudera Certified Administrator for Apache Kafka (CCA-131)
AWS Certified Data Analytics – Specialty (with a focus on streaming data solutions)
Knowledge of containerization technologies like Docker and Kubernetes.
Familiarity with other messaging systems like RabbitMQ or Apache Pulsar.
Experience with data serialization formats like Avro, Protobuf, or JSON.

Company Profile: WAISL is an ISO 9001:2015, ISO 20000-1:2018, ISO 22301:2019 certified, and CMMI Level 3 appraised digital transformation partner for businesses across industries, with a core focus on aviation and related adjacencies. We transform airports and related ecosystems through digital interventions with a strong service excellence culture. As a leader in our chosen space, we deliver world-class services focused on airports and their related domains, enabled through outcome-focused, next-gen digital/technology solutions. At present, WAISL is the primary technology solutions partner for Indira Gandhi International Airport, Delhi, Rajiv Gandhi International Airport, Hyderabad, Manohar International Airport, Goa, Kannur International Airport, Kerala, and Kuwait International Airport, and we expect to soon provide similar services for other airports in India and globally. WAISL, as a digital transformation partner, brings proven credibility in managing and servicing 135+ million passengers and 80+ airlines, with core integration, deployment, and real-time management experience of 2000+ applications, vendor-agnostically, in highly complex technology-converging ecosystems. This excellence in managed services delivered by WAISL has enabled its customer airports to be rated amongst the best-in-class service providers by Skytrax and ACI awards, and to win many innovation and excellence awards.
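To make the partition/replica responsibilities above tangible, here is a hedged topic-provisioning sketch with the confluent-kafka AdminClient; the broker, topic name, and sizing are illustrative assumptions:

```python
from confluent_kafka.admin import AdminClient, NewTopic

def create_topic(bootstrap: str, name: str) -> None:
    """Create a topic with explicit partition and replica counts --
    the levers an architect tunes for throughput and durability."""
    admin = AdminClient({"bootstrap.servers": bootstrap})
    futures = admin.create_topics(
        [NewTopic(name, num_partitions=6, replication_factor=3)]
    )
    for topic, future in futures.items():
        try:
            future.result()  # raises if creation failed (e.g. topic exists)
            print(f"created {topic}")
        except Exception as exc:
            print(f"failed to create {topic}: {exc}")

if __name__ == "__main__":
    create_topic("localhost:9092", "payments.events")  # placeholders
```

In an infrastructure-as-code setup, the same declaration would more likely live in Terraform or Ansible than in an ad-hoc script, as the responsibilities above suggest.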
Posted 3 weeks ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description

Salesforce is the global leader in customer relationship management (CRM) software. We pioneered the shift to cloud computing, and today we're delivering the next generation of social, mobile, and cloud technologies that help companies revolutionize the way they sell, service, market, and innovate - and become customer companies. We are the fastest growing of the top 10 enterprise software companies, the World's Most Innovative Company according to Forbes, and one of Fortune's 100 Best Companies to Work For. The CRM Database Sustaining Engineering Team deploys and manages some of the largest and most trusted databases in the world. Our customers rely on us to keep their data safe and highly available.

About The Position

As a Database Cloud Engineer, you will play a mission-critical role in ensuring the reliability, scalability, and performance of Salesforce's vast cloud database infrastructure. You'll help power the backbone of one of the largest SaaS platforms in the world. We're looking for engineers who bring a DevOps mindset and deep database expertise to architect and operate resilient, secure, and performant database environments across public cloud platforms (AWS, GCP). You'll collaborate across systems, storage, networking, and applications to deliver cloud-native reliability solutions at massive scale. The CRM Database Sustaining Engineering team is a fast-paced, dynamic, global team delivering and supporting databases and their cloud infrastructure to meet the substantial growth needs of the business. In this role, you will collaborate with Application, Systems, Network, Database, Storage and other infrastructure engineering teams to deliver innovative solutions in an agile, dynamic environment. You will be part of the global team and engage in 24x7 support responsibility within Europe; being part of a global team, you will occasionally need to be flexible in your working hours to stay in sync globally. You will also invest yourself in the Salesforce cloud database running on cutting-edge cloud technology and be responsible for its reliability.

Job Requirements
Bachelor's in Computer Science or Engineering, or equivalent experience.
A minimum of 8+ years of experience as a Database Engineer or in a similar role is required.
Expertise in database and SQL performance tuning in at least one relational database; knowledge of and hands-on experience with the Postgres database is a plus.
Broad and deep knowledge of at least two relational databases among Oracle, PostgreSQL & MySQL.
Working knowledge of cloud platforms (such as AWS or GCP) is highly desirable and considered a strong asset.
Experience with related cloud technologies: Docker, Spinnaker, Terraform, Helm, Jenkins, Git, etc.
Exposure to Zookeeper fundamentals and Kubernetes is highly desirable.
Working knowledge of SQL and at least one procedural language such as Python, Go, or Java, along with a basic understanding of C. A solid understanding of coding is highly preferred.
Excellent problem-solving skills and experience with production incident management / root cause analysis.
Experience with mission-critical distributed systems services, including supporting database production infrastructure with 24x7x365 support responsibilities.
Exposure to a fast-paced environment with a large-scale cloud infrastructure setup.
Excellent speaking, listening, and writing skills, attention to detail, and a proactive self-starter attitude.

Preferred Qualifications
Hands-on DevOps experience including CI/CD pipelines and container orchestration (Kubernetes, EKS/GKE).
Cloud-native DevOps experience (CI/CD, EKS/GKE, cloud deployments).
Familiarity with distributed coordination systems like Apache Zookeeper.
Deep understanding of distributed systems, availability design patterns, and database internals.
Monitoring and alerting expertise using Grafana, Argus, or similar tools.
Automation experience with tools like Spinnaker, Helm, and Infrastructure as Code frameworks.
Ability to drive technical projects from idea to execution with minimal supervision.
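On the SQL performance-tuning theme above, here is a minimal psycopg2 sketch that captures an executed query plan; the DSN and query are placeholder assumptions:

```python
import psycopg2

def explain(dsn: str, query: str) -> None:
    """Print the executed plan for a query: the starting point of most
    relational performance-tuning work."""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            # EXPLAIN ANALYZE actually runs the query; use with care in prod.
            cur.execute(f"EXPLAIN (ANALYZE, BUFFERS) {query}")
            for (line,) in cur.fetchall():
                print(line)

if __name__ == "__main__":
    explain(
        "dbname=app user=dba host=localhost",  # placeholder DSN
        "SELECT count(*) FROM orders WHERE created_at > now() - interval '1 day'",
    )
```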
Posted 4 weeks ago
12.0 - 14.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are seeking a highly skilled Platform and DevOps Lead Engineer to design, build, and manage robust infrastructure platforms that support scalable, secure, observable application environments. This role focuses on enabling development efficiency, infrastructure automation, and operational excellence across hybrid cloud and on-prem platforms, while ensuring end-to-end visibility through advanced observability (ELK) and monitoring practices.

Key Responsibilities:
Design, implement, and maintain scalable and secure platform infrastructure across cloud (S3, ECS, OpenShift) and on-premises (Linux/Windows VMs) environments.
Coordinate OS management and upgrades for Linux/Windows VMs.
Lead capacity planning, lifecycle management, and decommissioning (Decom) of infrastructure and services.
12 to 14 years of hands-on experience with the relevant tech stack.
Manage multiple microservices used within the team.
Administer and maintain Active Directory (AD) configurations, including ECS AD Groups, for secure identity and access control.
Manage and create OpenShift clusters for microservices.
Configure and maintain S3 buckets; ensure proper data security, lifecycle policies, and permissions.
Oversee and maintain the Autosys agent on OpenShift VMs for scheduled job execution and automation.
Handle expired certificate renewals and certificate lifecycle management across critical systems.
Integrate and maintain CyberArk for secure credential and secrets management.
Develop, maintain, and optimize CI/CD pipelines.
Implement and manage application observability solutions to ensure deep visibility into application behavior.
Set up, manage, and operate the ELK Stack (Elasticsearch, Logstash, Kibana) for centralized logging, analytics, and troubleshooting.
Build and manage dashboards, alerts, and metrics to support incident detection, response, and root cause analysis.
Ensure compliance with internal policies and FID requirements.

Required skills:
Python, PowerShell, Java (Spring), Autosys, Grid (IBM/CTI)
Monitoring - ELK (Elasticsearch, Logstash, Kibana, Metricbeat, Filebeat), AppO
CI/CD - git, Jenkins, TeamCity, Ignite, Kafka/Zookeeper, LightSpeed, OpenShift (ECS), uDeploy (UrbanCode Deploy)
Infrastructure - Capacity, Decom, FID, AD Groups, ECS AD Groups, S3, OpenShift clusters, Linux VM, Windows, Artifactory, Autosys virtual machine, certificates, CyberArk
Data Quality Check - Drools Workbench, Java (Spring), KIE API, REST

------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Applications Development
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Most Relevant Skills
Please see the requirements listed above.
------------------------------------------------------
Other Relevant Skills
For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
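As a hedged sketch of the centralized-logging piece above: querying Elasticsearch for recent error logs with the 8.x Python client (the host and index pattern are assumptions):

```python
from elasticsearch import Elasticsearch

def recent_errors(host: str = "http://localhost:9200") -> None:
    """Pull the latest ERROR-level entries from a Logstash-style index --
    the lookup behind a typical incident-response dashboard."""
    es = Elasticsearch(host)
    resp = es.search(
        index="app-logs-*",  # placeholder index pattern
        query={"match": {"level": "ERROR"}},
        sort=[{"@timestamp": {"order": "desc"}}],
        size=10,
    )
    for hit in resp["hits"]["hits"]:
        src = hit["_source"]
        print(src.get("@timestamp"), src.get("message"))

if __name__ == "__main__":
    recent_errors()
```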
Posted 4 weeks ago
7.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Experience: 7+ years
Qualification: BE/BTech/MCA only
Location: Mumbai
Banking domain is a must
Working week: 5 days, with alternate Saturdays working from the client location as per Bank norms
Notice Period: Immediate to 30 Days

Job Title: Kafka Administrator

Job Summary: We are seeking a skilled and proactive Kafka Administrator to manage and optimize our enterprise-grade messaging platform. The ideal candidate will have extensive experience with Apache Kafka, including ecosystem components such as Zookeeper, Kafka Connect, ksqlDB, and Kafka Streams, and a deep understanding of KRaft mode, Docker, Linux, and Java/JVM tuning. Strong knowledge of TCP/IP networking, firewall rules, and DNS is essential for ensuring robust, secure, and high-performance Kafka infrastructure.

Key Responsibilities:
Install, configure, and administer Apache Kafka clusters in production and non-production environments.
Manage Kafka brokers and Zookeeper nodes, and plan the transition to a KRaft-based architecture.
Design and implement Kafka Connect pipelines, ksqlDB queries, and Kafka Streams applications.
Handle Kafka security configurations (SSL, SASL, ACLs) and integrations with monitoring/logging systems.
Perform performance tuning and JVM optimization for Kafka components and Java-based clients.
Develop and maintain shell scripts to automate Kafka operations and monitoring tasks.
Use Docker to containerize Kafka ecosystem tools for deployment and testing.
Implement backup and disaster recovery plans for Kafka data and configurations.
Monitor and manage Kafka throughput, latency, disk usage, and cluster health.
Troubleshoot network-level issues, including TCP/IP connectivity, firewall rules, load balancers, and DNS resolution affecting Kafka availability.
Work closely with developers, DevOps, and security teams to support application integration with Kafka.
Maintain detailed documentation, runbooks, and knowledge bases for Kafka infrastructure.

Technical Skills Required:
Core Skills:
Apache Kafka (3+ years in a Kafka admin role)
Zookeeper and Kafka KRaft (Raft-based consensus architecture)
Kafka Connect, Kafka Streams, ksqlDB
Java and JVM tuning (heap, GC, thread dump analysis)
Shell scripting (Bash/Unix scripting)
Linux system administration
Docker for containerization of Kafka components
Performance optimization and resource tuning for Kafka clusters

Networking & Security:
TCP/IP, firewalls, load balancers, DNS
Kafka over SSL/SASL, setting up Kafka ACLs, and configuring secure client access

Other Valuable Skills:
Experience with monitoring tools: Prometheus, Grafana, ELK, Confluent Control Center
Backup strategies using Kafka MirrorMaker or other replication tools
Experience with distributed systems and HA/DR setups
Cloud exposure (optional): Kafka on AWS MSK / Azure Event Hub / Confluent Cloud
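One routine task for this role is watching consumer lag; here is a minimal sketch with the confluent-kafka client (broker, group, and topic are placeholders):

```python
from confluent_kafka import Consumer, TopicPartition

def consumer_lag(bootstrap: str, group: str, topic: str) -> None:
    """Report per-partition lag: high watermark minus committed offset."""
    consumer = Consumer({"bootstrap.servers": bootstrap, "group.id": group})
    metadata = consumer.list_topics(topic, timeout=10)
    partitions = [TopicPartition(topic, p)
                  for p in metadata.topics[topic].partitions]
    for tp, committed in zip(partitions, consumer.committed(partitions, timeout=10)):
        low, high = consumer.get_watermark_offsets(tp, timeout=10)
        # A group with no committed offset yet effectively starts at `low`.
        offset = committed.offset if committed.offset >= 0 else low
        print(f"partition {tp.partition}: lag={high - offset}")
    consumer.close()

if __name__ == "__main__":
    consumer_lag("localhost:9092", "payments-app", "payments.events")  # placeholders
```

The same numbers feed the throughput and cluster-health monitoring mentioned above, typically exported to Prometheus rather than printed.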
Posted 4 weeks ago
7.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Qualification: BE/BTech/MCA only
Location: Mumbai
Banking domain is a must
Working week: 5 days, with alternate Saturdays working from the client location as per Bank norms
Notice Period: Immediate to 30 Days

Job Title: Senior Kafka Platform Technical Lead – BFSI Domain
Location: Mumbai, India
Experience: 7 to 14 Years
Kafka Experience: Minimum 3+ years hands-on experience
Industry Domain: Mandatory experience in Banking, Financial Services, or Insurance (BFSI)

Job Summary: We are looking for a highly experienced and motivated Senior Kafka Platform Technical Lead to join our core platform team in Mumbai. The ideal candidate will have deep expertise in building and managing Apache Kafka ecosystems, with a strong grasp of Zookeeper, KRaft, Docker, shell scripting, Linux administration, and Java/JVM tuning. The candidate must have at least 3 years of hands-on Kafka experience within the Banking or BFSI domain.

Key Responsibilities:
Design, deploy, and manage high-throughput, resilient Kafka clusters in production environments.
Lead Kafka cluster management, including installation, configuration, upgrades, and performance tuning.
Support the Kafka architecture's evolution from Zookeeper to KRaft mode.
Configure and manage Kafka components: brokers, producers, consumers, topics, partitions, replication, schema registry, and connectors.
Develop automation scripts using Shell/Bash to support operational efficiency and reliability.
Manage and deploy Kafka components using Docker and container orchestration best practices.
Work closely with application developers to support Kafka integration, capacity planning, and throughput optimization.
Monitor system health and ensure uptime, availability, and performance of Kafka platforms.
Tune and monitor Java-based microservices with a focus on JVM performance (GC, heap, thread, and CPU profiling).
Create and maintain detailed operational documentation, runbooks, and a knowledge base.

Technical Skills:
Mandatory:
Kafka (Apache/Confluent) – Strong production experience (minimum 3+ years)
Zookeeper and KRaft (Kafka Raft mode) – Deep knowledge of coordination layers and migration strategies
Linux administration – Hands-on with system performance tuning, process monitoring, and file systems
Shell scripting – Proficiency in Bash for automation and log analysis
Docker – Containerizing Kafka services, managing container lifecycles
Java – Experience developing/debugging JVM-based applications
JVM tuning – Proficient in analyzing garbage collection (GC), heap sizing, and thread dumps

Preferred/Nice to Have:
Experience with the Confluent Platform (Connect, KSQL, Control Center)
Understanding of Kafka security (SSL, SASL, ACLs)
Familiarity with monitoring tools (Prometheus, Grafana, ELK, etc.)
Experience in CI/CD pipelines using Git, Jenkins, Ansible, etc.
Cloud platforms: Kafka on AWS/MSK, Azure, or GCP
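Since JVM tuning features heavily here, a small sketch that samples garbage-collection utilisation for a broker process by shelling out to the standard JDK jstat tool; the PID is a hypothetical local broker:

```python
import subprocess

def sample_gc(pid: int, interval_ms: int = 1000, samples: int = 5) -> None:
    """Run `jstat -gcutil` to sample heap-region occupancy and GC time
    for a JVM: a quick first look before deeper heap or thread analysis."""
    result = subprocess.run(
        ["jstat", "-gcutil", str(pid), str(interval_ms), str(samples)],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)

if __name__ == "__main__":
    sample_gc(12345)  # hypothetical Kafka broker PID
```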
Posted 4 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
DevOps / Cloud Infrastructure Engineer

We are looking for an experienced DevOps / Cloud Infrastructure Engineer to join our team and help design, automate, and maintain highly scalable, secure, multi-tenant AWS infrastructure. This role requires a strong background in DevOps practices, cloud architecture, infrastructure as code, and system observability.

Key Responsibilities
Architect and maintain AWS infrastructure with multi-tenant support
Implement infrastructure as code using Terraform, CDK, or CloudFormation
Manage and scale Kubernetes / Amazon EKS clusters
Automate deployments through CI/CD pipelines (CodePipeline, Jenkins, GitHub Actions)
Set up monitoring and observability using Grafana, Loki, Prometheus, and CloudWatch
Manage and optimize Kafka, Zookeeper, Redis, and Milvus (vector DB)
Enforce IAM policies, tagging standards, security guardrails, and audit trails
Develop automation and tooling using Bash, Python, or Go

Required Skills
AWS core (EC2, VPC, Lambda, RDS, IAM) | Kubernetes / EKS | Terraform / CDK / CloudFormation
CI/CD (CodePipeline, Jenkins, GitHub Actions) | Kafka | Redis | Milvus | Monitoring & Logging
Scripting (Python, Bash, Go) | IAM & Compliance
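For the tagging-standards guardrail above, a hedged boto3 sketch that flags EC2 instances missing required tags; the tag keys and region are illustrative policy assumptions:

```python
import boto3

REQUIRED_TAGS = {"Owner", "CostCenter", "Environment"}  # illustrative policy

def untagged_instances(region: str = "ap-south-1") -> list[str]:
    """Return instance IDs missing any required tag -- raw input for a
    tagging-compliance report or an auto-remediation job."""
    ec2 = boto3.client("ec2", region_name=region)
    offenders = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAGS - tags:
                    offenders.append(instance["InstanceId"])
    return offenders

if __name__ == "__main__":
    print(untagged_instances())
```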
Posted 4 weeks ago
0 years
0 Lacs
Hyderabad
On-site
Business Unit: Cubic Corporation

Company Details: When you join Cubic, you become part of a company that creates and delivers technology solutions in transportation to make people’s lives easier by simplifying their daily journeys, and defense capabilities to help promote mission success and safety for those who serve their nation. Led by our talented teams around the world, Cubic is committed to solving global issues through innovation and service to our customers and partners. We have a top-tier portfolio of businesses, including Cubic Transportation Systems (CTS) and Cubic Defense (CD). Explore more on Cubic.com.

Job Details:
Job Summary: As part of the Global Operations Systems Team, reporting to the IT Systems Team Leader, you will be a key liaison, working closely with colleagues and customers to ensure in-house or deployed systems and devices are kept current and functional. You will assist in implementing strategies for Central System and Device application deployment, and manage application installations and configuration. You will support and administer the Central System on cross-platform operating systems such as Windows, AIX, UNIX, and Linux in close coordination with the Infrastructure and Engineering groups. You will ensure Central System application and Device application migrations, upgrades, and installations are well-rehearsed and documented prior to conducting official installations.

Essential Job Duties and Responsibilities
Perform day-to-day application administration.
Monitor and manage application and infrastructure health.
Manage and control application software licenses.
Install and upgrade all applications across on-premise and cloud platforms.
Troubleshoot issues with containerized applications running on Docker or Kubernetes.
Perform application deployments using manual methods and automation tools.
Maintain secure applications.
Work collaboratively with Project Managers, Operations teams, Test teams, Developers, and Clients in relation to supporting application deployments and changes.
Be responsible for effective deployments into live and non-production environments, ensuring impact to operational service is minimized.
Provide sign-off on various release gateways.
Provide guidance and recommendations on all backend operating systems and infrastructure in relation to application function and performance.
Provide assistance with Windows, UNIX, and Linux based platforms in relation to application function and performance.
Monitor system backup/restore/failover on device software and hardware.
Mentor staff; act as a key liaison to peers and other system analysts.
Conduct performance tuning and optimization of resource configuration on all platforms and the LAN.
Provide assistance in the configuration of routers, firewalls, and load balancers as related to application requirements.
Assist in the installation and configuration of databases, including database migration tasks when performing application upgrades.
Assist in installing, configuring, and operating monitoring software such as SolarWinds, Dexda, and Azure Insights.
Develop documentation describing installation-specific configurations and processes.
Interact with application and infrastructure vendors and distributors.
Participate as a primary contact in the 24x7 on-call support rotation.
Comply with Cubic’s values and adhere to all company policies and procedures. In particular, comply with the code of conduct, quality, security and occupational health, safety, and environmental policies and procedures.

In addition to the duties and responsibilities listed, the job holder is required to perform other duties assigned by their manager from time to time, as may be reasonably required of them.

Minimum Job Requirements:
Essential
A university degree in a numerate subject (e.g., Computer Science, Maths, Engineering, Natural Science) or a relevant field, OR equivalent years of experience in lieu of a degree.
Five (5)+ years of experience in maintaining applications, both third-party COTS (Apache Kafka, Zookeeper, Apache Storm, Apigee) and internally developed.
Core understanding of CI/CD pipelines such as Jenkins, Octopus Deploy, Azure DevOps, or GitHub Actions.
Knowledge of and experience administering various Windows and UNIX operating systems, including bash scripting.
Knowledge of databases, including SQL Server and Oracle, and of SQL in general.
In-depth understanding of system administration/analyst methodology and principles.
Proficient with all Microsoft Office applications.

Desirable
ITIL experience
Understanding of Windows applications such as Microsoft CRM and SAP
Scripting ability to automate manual day-to-day tasks using tools such as Ansible and HashiCorp Vault
Knowledge of Azure & cloud-based technologies
Knowledge of Kubernetes or Docker

Worker Type: Employee
Posted 1 month ago
5.0 years
0 Lacs
Andhra Pradesh
On-site
The SRE is part of an application team matrixed to the Cloud Services Team to perform a specialized function that focuses on the automation of availability, performance, maintainability and optimization of business applications on the platform. To be effective in the position, an SRE must have strong AWS, Terraform and GitHub skills, as the platform is 100% automated. All changes being applied to the environment must be automated with Terraform and checked into GitHub version control.

A matrixed SRE will be given the Reliability Engineering role in the accounts they are responsible for. This role includes the rights to perform all the necessary functions required to support the applications in the IaaS environment. An SRE is required to adhere to all Enterprise processes and controls (i.e. Change Mgmt, Incident and Problem Mgmt, etc.) and ensure alignment to Cloud standards and best practices.

Ability to write and implement infrastructure as code and platform automation.
Experience implementing Infrastructure as Code with Terraform.
Collaborate with Cloud Services and Application teams to deliver projects.
Deploy infrastructure-as-code (IaC) releases to QA, staging, and production environments.
Responsible for building the automation for any account customizations required by the application: custom roles, policies, security groups, etc.

DevOps Engineer requirements:
Should have a minimum of 5 years of working experience, especially as a DevOps Engineer/SRE.
Should be comfortable working in an IC role, with very good communication skills, verbal & written.
OS knowledge: Should have 3 years of hands-on working experience with Linux.
SCM: Should have 3 years of hands-on working experience with Git, preferably GitHub Enterprise.
Cloud experience: Should have a thorough knowledge of AWS; certification is preferred.
CI/CD tool: 4 years of hands-on working experience with Jenkins, or otherwise any other CI/CD tool.
EKS CI/CD: Working experience with Jenkins (or any other CI/CD tool) for EKS.
Jenkins pipelines: Hands-on experience with pipeline scripts is preferred.
Containers: Minimum 1 year of hands-on working experience with Docker/Kubernetes; CKA (Certified Kubernetes Administrator) certification is preferred.
MuleSoft Runtime Fabric: Install and configure the Anypoint Runtime Fabric environment and deploy applications on Runtime Fabric.
Cloud infra provisioning tool: 2 years of hands-on working experience with Terraform / Terraform Enterprise / CloudFormation.
Application provisioning tool: 2 years of hands-on working experience with Puppet/Ansible/Chef.
Data components: Should have good knowledge of, and a minimum of 1 year of working experience with, ELK, Kafka, and Zookeeper; HDF knowledge is an added advantage.
Tools: Consul and Vault knowledge is an added advantage.
Scripting knowledge: 3 years of hands-on working experience with any scripting language (Shell/Python/Ruby, etc.).
Very good troubleshooting skills; should have hands-on working experience with production deployments and incidents.
MuleSoft knowledge: Added advantage.
Java Spring Boot knowledge: Added advantage.

About Virtusa

Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa.

We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
Posted 1 month ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking a highly skilled and motivated Senior Kafka Infrastructure Engineer to join our platform engineering team. This role is ideal for someone who is deeply experienced with the Apache Kafka ecosystem and passionate about building scalable, reliable, and secure streaming infrastructure.

Key Responsibilities:
Design, deploy, manage, and scale highly available Kafka clusters in production environments.
Administer Kafka components, including brokers, ZooKeeper/KRaft, topics, partitions, Schema Registry, Kafka Connect, and Kafka Streams.
Deploy Kafka on Kubernetes clusters using Strimzi operators, Helm charts, and Terraform.
Implement autoscaling, resource optimisation, network policies, and persistent storage (PVC) configurations.
Monitor Kafka health and performance using Prometheus, Grafana, JMX Exporter, and custom metrics.
Secure Kafka infrastructure with TLS, SASL, ACLs, Kubernetes secrets, and RBAC.
Automate Kafka provisioning and deployment in AWS, GCP, or Azure (preferably with EKS, GKE, or AKS).
Integrate Kafka infrastructure management into CI/CD pipelines using ArgoCD, Jenkins, etc.
Build and maintain containerised Kafka deployment workflows and release pipelines.

Required Skills and Experience:
Deep understanding of Kafka architecture and internals.
Extensive hands-on experience managing Kafka in cloud-native environments.
Proficiency with Kubernetes and container orchestration concepts.
Experience with infrastructure-as-code tools like Helm and Terraform.
Solid grasp of cloud-native security practices and authentication mechanisms.
Proven track record in automation and incident resolution.
Strong debugging, analytical, and problem-solving skills.

Soft Skills:
Proactive, ownership-driven, automation-first mindset.
Strong verbal and written communication skills.
Comfortable working collaboratively with SREs, developers, and other cross-functional teams.
Detail-oriented and documentation-focused.
Willingness to mentor and share knowledge with peers.

Location: Hybrid (Gurgaon)
Work Hours: Aligned with USA time zones
Urgency: Must be able to join within 1 month
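To illustrate the client-facing side of the TLS/SASL hardening listed above, a hedged confluent-kafka configuration sketch; endpoints, credentials, and the topic are placeholders, and real secrets would be injected from Kubernetes secrets rather than written in source:

```python
from confluent_kafka import Producer

# Placeholder connection details -- in a Strimzi-managed cluster these
# would come from Kubernetes secrets, never hard-coded.
secure_config = {
    "bootstrap.servers": "kafka-bootstrap.example.internal:9093",
    "security.protocol": "SASL_SSL",        # TLS transport + SASL auth
    "sasl.mechanism": "SCRAM-SHA-512",
    "sasl.username": "app-producer",
    "sasl.password": "change-me",
    "ssl.ca.location": "/etc/kafka/certs/ca.crt",  # cluster CA bundle
}

producer = Producer(secure_config)
producer.produce("audit.events", value=b"ping")  # placeholder topic
producer.flush()
```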
Posted 1 month ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Applications Development Intermediate Programmer Analyst is an intermediate-level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Role Summary: We are seeking a skilled DevOps Engineer with a strong focus on Continuous Integration and Continuous Deployment (CI/CD) to join our engineering team. In this role, you will be responsible for designing, implementing, and maintaining robust CI/CD pipelines that enable fast, secure, and reliable software delivery. You will work closely with development, QA, and operations teams to automate and streamline the software release process.

Key Responsibilities:
Design, develop, and maintain scalable CI/CD pipelines using tools such as GitLab CI, GitHub Actions, Git, Jenkins, TeamCity, Ignite, Kafka/Zookeeper, LightSpeed, OpenShift (ECS), and uDeploy (UrbanCode Deploy).
Automate build, test, and deployment workflows for various application environments (development, staging, production).
Integrate unit testing, static code analysis, security scanning, and performance tests into pipelines.
6 to 8 years of hands-on experience.
Manage artifact repositories and development strategies.
Collaborate with developers and QA engineers to improve software development practices and shorten release cycles.
Monitor and optimize pipeline performance to ensure fast feedback and deployment reliability.
Ensure compliance with security and governance policies throughout the CI/CD process.
Troubleshoot pipeline failures, build issues, and deployment problems across environments.

Required skills:
Monitoring - ELK (Elasticsearch, Logstash, Kibana, Metricbeat, Filebeat), AppO
CI/CD - git, Jenkins, TeamCity, Ignite, Kafka/Zookeeper, LightSpeed, OpenShift (ECS), uDeploy (UrbanCode Deploy)
Data Quality Check - Drools Workbench, Java (Spring), KIE API, REST

Qualifications:
5+ years of relevant experience in the financial services industry
Intermediate-level experience in an Applications Development role
Consistently demonstrates clear and concise written and verbal communication
Demonstrated problem-solving and decision-making skills
Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements

Education: Bachelor’s degree/University degree or equivalent experience

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Applications Development
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Most Relevant Skills
Please see the requirements listed above.
------------------------------------------------------
Other Relevant Skills
CI/CD, DevOps, GitLab.
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 1 month ago
0 years
9 - 9 Lacs
Bengaluru
On-site
Associate - Production Support Engineer Job ID: R0388737 Full/Part-Time: Full-time Regular/Temporary: Regular Listed: 2025-06-27 Location: Bangalore Position Overview Job Title: Associate - Production Support Engineer Location: Bangalore, India Role Description You will be operating within Corporate Bank Production as an Associate, Production Support Engineer in the Corporate Banking subdivisions. You will be accountable to drive a culture of proactive continual improvement into the Production environment through application, user request support, troubleshooting and resolving the errors in production environment. Automation of manual work, monitoring improvements and platform hygiene. Supporting the resolution of issues and conflicts and preparing reports and meetings. Candidate should have experience in all relevant tools used in the Service Operations environment and has specialist expertise in one or more technical domains and ensures that all associated Service Operations stakeholders are provided with an optimum level of service in line with Service Level Agreements (SLAs) / Operating Level Agreements (OLAs). Ensure all the BAU support queries from business are handled on priority and within agreed SLA and also to ensure all application stability issues are well taken care off. Support the resolution of incidents and problems within the team. Assist with the resolution of complex incidents. Ensure that the right problem-solving techniques and processes are applied Embrace a Continuous Service Improvement approach to resolve IT failings, drive efficiencies and remove repetition to streamline support activities, reduce risk, and improve system availability. Be responsible for your own engineering delivery plus, using data and analytics, drive a reduction in technical debt across the production environment with development and infrastructure teams. Act as a Production Engineering role model to enhance the technical capability of the Production Support teams to create a future operating model embedded with engineering culture. Deutsche Bank’s Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support." What we’ll offer you As part of our flexible scheme, here are just some of the benefits that you’ll enjoy Best in class leave policy Gender neutral parental leaves 100% reimbursement under childcare assistance benefit (gender neutral) Sponsorship for Industry relevant certifications and education Employee Assistance Program for you and your family members Comprehensive Hospitalization Insurance for you and your dependents Accident and Term life Insurance Complementary Health screening for 35 yrs. 
and above.

Your key responsibilities

Lead by example to drive a culture of proactive continual improvement in the Production environment through automation of manual work, monitoring improvements and platform hygiene. Carry out technical analysis of the Production platform to identify and remediate performance and resiliency issues. Engage in the Software Development Lifecycle (SDLC) to enhance Production standards and controls. Update the run book and KEDB as and when required. Participate in all BCP and component failure tests based on the run books. Understand the flow of data through the application infrastructure; it is critical to understand the dataflow to best provide operational support. Perform event monitoring and management via a 24x7 workbench that both monitors and regularly probes the service environment, acting on the instructions of the run book. Drive knowledge management across the supported applications and ensure full compliance. Work with team members to identify areas of focus where training may improve team performance and incident resolution.

Your skills and experience

Recent experience of applying technical solutions to improve the stability of production environments. Working experience of some of the following technology skills:
Technologies/Frameworks:
Unix, Shell Scripting and/or Python
SQL stack
Oracle 12c/19c - for PL/SQL; familiarity with OEM tooling to review AWR reports and parameters
ITIL v3 Certified (must)
Control-M, CRON scheduling
MQ - DBUS, IBM
Java 8/OpenJDK 11 (at least) - for debugging
Familiarity with the Spring Boot framework
Data streaming - Kafka (experience with the Confluent flavor a plus) and ZooKeeper
Hadoop framework
Configuration management tooling: Ansible
Operating system/platform: RHEL 7.x (preferred), RHEL 6.x; OpenShift (as we move towards cloud computing, and Fabric is dependent on OpenShift)
CI/CD: Jenkins (preferred)
APM tooling: one of Splunk, AppDynamics, Geneos, New Relic
Other platforms: Scheduling - Control-M is a plus, Autosys, etc.; Search - Elasticsearch and/or Solr is a plus
Methodology: micro-services architecture, SDLC, Agile
Fundamental network topology - TCP, LAN, VPN, GSLB, GTM, etc.
Familiarity with TDD and/or BDD
Distributed systems
Experience on cloud platforms such as Azure or GCP is a plus
Familiarity with containerization/Kubernetes
Tools: ServiceNow, Jira, Confluence, Bitbucket and/or Git, IntelliJ, SQL*Plus, simple Unix tooling - PuTTY, mPutty, Exceed, (PL/)SQL Developer
Good understanding of the ITIL Service Management framework, including Incident, Problem, and Change processes. Ability to self-manage a book of work and ensure clear transparency on progress, with clear, timely communication of issues. Excellent communication skills, both written and verbal, with attention to detail. Ability to work in a Follow-the-Sun model, in virtual teams and in a matrix structure. Service Operations experience within a global operations context. 6-9 years of experience in IT in large corporate environments, specifically in the area of controlled production environments or in Financial Services Technology in a client-facing function. Global Transaction Banking experience is a plus. Experience of end-to-end Level 2/3/4 management and a good overview of Production/Operations Management overall. Experience of run-book execution and of supporting complex application and infrastructure domains. Good analytical, troubleshooting and problem-solving skills. Working knowledge of incident tracking tools (e.g., Remedy, HEAT).
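The 24x7 probing described above is typically scripted rather than done by hand. Below is a minimal sketch of a Kafka health probe using the kafka-clients AdminClient API; the broker address and timeouts are illustrative placeholders, not Deutsche Bank configuration.

import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class KafkaHealthProbe {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder address; in practice this comes from the run book or config management.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "5000");
        try (AdminClient admin = AdminClient.create(props)) {
            // Count reachable brokers and topics; a timeout on either call signals trouble.
            int brokers = admin.describeCluster().nodes().get(10, TimeUnit.SECONDS).size();
            int topics = admin.listTopics().names().get(10, TimeUnit.SECONDS).size();
            System.out.printf("brokers=%d topics=%d%n", brokers, topics);
        }
    }
}

A probe like this would normally be wrapped in the team's scheduler (Control-M or CRON, both named above) so that a failed run raises an alert instead of printing to stdout.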
How we’ll support you

Training and development to help you excel in your career
Coaching and support from experts in your team
A culture of continuous learning to aid progression
A range of flexible benefits that you can tailor to suit your needs

About us and our teams

Please visit our company website for further information: https://www.db.com/company/company.htm

We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
Posted 1 month ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Why Join 7-Eleven Global Solution Center?

When you join us, you'll embrace ownership as teams within specific product areas take responsibility for end-to-end solution delivery, supporting local teams and integrating new digital assets. Challenge yourself by contributing to products deployed across our extensive network of convenience stores, processing over a billion transactions annually. Build solutions for scale, addressing the diverse needs of our 84,000+ stores in 19 countries. Experience growth through cross-functional learning, encouraged and applauded at 7-Eleven GSC. With our size, stability, and resources, you can navigate a rewarding career. Embody leadership and service as 7-Eleven GSC remains dedicated to meeting the needs of customers and communities.

Why We Exist, Our Purpose and Our Transformation

7-Eleven is dedicated to being a customer-centric, digitally empowered organization that seamlessly integrates our physical stores with digital offerings. Our goal is to redefine convenience by consistently providing top-notch customer experiences and solutions in a rapidly evolving consumer landscape. Anticipating customer preferences, we create and implement platforms that empower customers to shop, pay, and access products and services according to their preferences. To achieve success, we are driving a cultural shift anchored in leadership principles, supported by the realignment of organizational resources and processes.

At 7-Eleven we are guided by our Leadership Principles. Each principle has a defined set of behaviors which help guide the 7-Eleven GSC team to Serve Customers and Support Stores.
Be Customer Obsessed
Be Courageous with Your Point of View
Challenge the Status Quo
Act Like an Entrepreneur
Have an “It Can Be Done” Attitude
Do the Right Thing
Be Accountable

Job Title: Software Engineer I (Java)
Location: Bangalore

About the role: As a Software Engineer I (Java), you will design and develop responsive, efficient, reusable applications that are used by end customers, store personnel and business users. You will fully utilize cloud-based services to build and deploy the applications. You will be responsible for establishing best practices, defining processes, working with multiple stakeholders, and ensuring top quality of the product. The ideal candidate is a highly organized individual with a passion for user experience and experience building impactful and meaningful customer experiences. Why choose 7-Eleven? Because we are disruptors. We are makers. We are innovators. We are here to make a lasting change, and that change starts with you.

Key Responsibilities:
Designing and implementing software using Java.
Ensuring code quality through unit, integration, and end-to-end testing.
Optimizing applications for maximum performance.
Contributing to DevOps activities (CI/CD, infrastructure, etc.).
Collaborating with distributed teams on cross-functional deliveries.
Troubleshooting, analyzing, and resolving integration and production issues.

Required Qualifications:
3-5 years of experience in Java, Spring, Hibernate, Microservices.
3+ years of experience in Spring-related technologies such as Spring Core, Spring Boot, Spring MVC, and Spring Integration.
3+ years of experience in any NoSQL database (Cassandra/MongoDB/DynamoDB).
3+ years of experience in application analysis, maintenance and support.
3+ years of experience in various cloud services (AWS, Azure, GCP).
Experience with distributed technologies like Kafka, Spark, and ZooKeeper (a minimal consumer sketch follows this listing).
Experience in software development life cycle activities.
Proficient in testing frameworks to design and integrate quality tests during development.
Knowledge of CI/CD pipelines built with GitHub, Maven, Jenkins, etc.
Excellent written and verbal communication skills.
Experience leading and mentoring team members to help them grow to their full potential.
Ability to understand business requirements and translate them into technical requirements.

7-Eleven Global Solution Center is an Equal Opportunity Employer committed to diversity in the workplace. Our strategy focuses on three core pillars – workplace culture, diverse talent and how we show up in the communities we serve. As the recognized leader in convenience, the 7-Eleven family of brands embraces diversity, equity and inclusion (DE+I). It’s not only the right thing to do for customers, Franchisees and employees—it’s a business imperative.

Privileges & Perquisites:

7-Eleven Global Solution Center offers a comprehensive benefits plan tailored to meet the needs and improve the overall experience of our employees, aiding in the management of both their professional and personal aspects.

Work-Life Balance: Encouraging employees to unwind, recharge, and find balance, we offer flexible and hybrid work schedules along with diverse leave options. Supplementary allowances and compensatory days off are provided for specific work demands.

Well-Being & Family Protection: Comprehensive medical coverage for spouses, children, and parents/in-laws, with voluntary top-up plans, OPD coverage, day care services, and access to health coaches. Additionally, an Employee Assistance Program with free, unbiased and confidential expert consultations for personal and professional issues.

Wheels and Meals: Free transportation and cafeteria facilities with diverse menu options including breakfast, lunch, snacks, and beverages, with customizable and health-conscious choices.

Certification & Training Program: Sponsored training for specialized certifications. Investment in employee development through labs and learning platforms.

Hassle-free Relocation: Support and reimbursement for newly hired employees relocating to Bangalore, India.
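As a rough illustration of the distributed-technology requirement above, here is a minimal spring-kafka consumer sketch; the topic, group and class names are assumptions for illustration, not 7-Eleven's actual configuration.

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

// Illustrative listener: spring-kafka builds the underlying consumer
// from spring.kafka.* properties in application.yml.
@Component
public class TransactionEventListener {

    @KafkaListener(topics = "store-transactions", groupId = "txn-processor")
    public void onTransaction(String payload) {
        // A real service would validate the event and persist it to a NoSQL store
        // such as Cassandra, MongoDB or DynamoDB, as listed in the qualifications.
        System.out.println("received: " + payload);
    }
}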
Posted 1 month ago
4.0 years
0 Lacs
India
On-site
At Medtronic you can begin a life-long career of exploration and innovation, while helping champion healthcare access and equity for all. You’ll lead with purpose, breaking down barriers to innovation in a more connected, compassionate world.

A Day in the Life

At Medtronic, we push the limits of what technology can do to make tomorrow better than yesterday, and that makes it an exciting and rewarding place to work. We value what makes you unique. Be a part of a company that thinks differently to solve problems, make progress, and deliver meaningful innovations.

As Sr. Database Architect, you will be a part of our Global DBA team responsible for providing design and support. The Database Architect role is pivotal in establishing our Database as a Service model for cloud-based and on-premise database deployments. The Database Architect works as part of a team of database architects, solution architects, engineers and business customers to bring industry best practices to database design, provisioning, automation, security, reliability and availability. This role will work with a global IT team to engineer solutions to complex business problems while leveraging open source and traditional databases. Your commitment to driving and managing high-quality execution, operational excellence and delivering technology solutions that meet business needs and optimize customer experience will have a direct impact on the organization and, ultimately, affect the lives of millions. We believe that when people from different cultures, genders, and points of view come together, innovation is the result, and everyone wins. Medtronic walks the walk, creating an inclusive culture where you can thrive.

Responsibilities may include the following, and other duties may be assigned:
Participate in a global team of Database Architects, Engineers and Administrators to provide technical solutions for projects that engage Medtronic database platforms.
Provide hands-on technical support across the various on-premise and cloud-based database offerings.
Participate in the design of our Database as a Service solutions for cloud-based database offerings.
Participate in the design of our automated provisioning solutions for on-premise and cloud-based database offerings using modern automation tools.
Partner with various IT infrastructure teams to fulfill project needs while providing exceptional customer outcomes.
Provide technical leadership and governance for the big data team, ensuring the implementation of solution architecture within the MongoDB and Hadoop ecosystems.
Design and implement scalable MongoDB architecture (replica sets, sharding, high availability); a minimal sketch follows this listing.
Ensure data security and compliance with industry standards, including Kerberos integration and encryption.
Provide architectural oversight of the administration, configuration, and maintenance of MongoDB and Hadoop clusters and associated databases, on-premise and cloud-based (AWS).
Provide experience with database monitoring technologies for on-premise and cloud-based databases, including Prometheus, Grafana, SQL Studio, SolarWinds, RDS Console, CloudWatch, Ambari, Cloudera Manager, or custom scripts to track performance metrics and detect issues.
Provide experience with desktop client tools including MongoDB Database Tools, pgAdmin, DBeaver, HeidiSQL, Navicat, SQL Developer, SQL Studio, Toad, etc.
Provide experience with database security products and techniques including auditing, encryption, virtual private databases, row-level security, etc.
Provide experience with enterprise backup and recovery tools and techniques for providing full, incremental, online, offline and transaction log backups.
Work well with IT teams and business partners to identify and implement opportunities to improve database performance, reliability, scalability and availability.
Willingness to contribute, learn and grow as a member of a team that supports a variety of technologies.

Required Knowledge and Experience:
Bachelor’s Degree.
Minimum of 4-5 years as a MongoDB DBA/Architect, with strong knowledge of internals, the aggregation framework and indexing, and experience with MongoDB Atlas.
Minimum of 5+ years as a Hadoop DBA/Architect, with experience with Hadoop distributions (Cloudera, Hortonworks, or Apache) and strong knowledge of Hadoop ecosystem components: HDFS, YARN, Hive, HBase, Spark, Oozie, ZooKeeper, etc.
Minimum of 2 years’ experience with open source or cloud-based databases.

Nice to Have:
Recent experience with Apache Hadoop versions 3.4 and 3.3.
Proficiency in Hadoop ecosystem tools like Pig, Hive, HBase, and Oozie.
Experience integrating MongoDB with big data ecosystems (Kafka, Spark, etc.) is a plus.
Recent experience with MariaDB, PostgreSQL, Snowflake, SQL Server or Oracle databases in addition to Hadoop and MongoDB.
Recent experience with capacity planning and estimating requirements for MongoDB and Hadoop clusters.
Recent experience deploying Hadoop clusters using Apache sources or distributions like Cloudera.
Recent experience sizing Hadoop clusters based on the data to be stored in HDFS.
Recent experience deploying MongoDB clusters using MongoDB Ops Manager or the Atlas CLI.
Ability to manage and review Hadoop and MongoDB log files.
Recent experience with IAM tools and automated security provisioning for on-premise and cloud-based database technologies.
Recent experience with automated database provisioning using Terraform, CloudFormation, etc.
Recent experience with scripting languages like MongoDB Shell, Perl, PowerShell, BASH, KSH, etc.
Recent experience with coding languages such as Java, Python, JavaScript and SQL.
Recent experience provisioning databases on Microsoft Azure, Amazon EC2 or Amazon RDS.
Recent experience with modern backup approaches and tools, including NetBackup or Data Domain, for on-premise and cloud-based databases.
Significant database development or support experience with application development and implementation teams.
Recent experience with DevOps and software engineering technologies.
Proficiency in Unix, Linux and Windows Server operating systems.
Knowledge of SQL commands for data retrieval and manipulation.
Understanding of various DBMS types (i.e., relational, columnar, non-relational).
Proficiency in installation, configuration, and maintenance of different DBMS platforms.
Proficiency in data migration between different database systems.

Physical Job Requirements

The above statements are intended to describe the general nature and level of work being performed by employees assigned to this position, but they are not an exhaustive list of all the required responsibilities and skills of this position.

Benefits & Compensation

Medtronic offers a competitive salary and flexible benefits package. A commitment to our employees lives at the core of our values. We recognize their contributions. They share in the success they help to create. We offer a wide range of benefits, resources, and competitive compensation plans designed to support you at every career and life stage.
This position is eligible for a short-term incentive called the Medtronic Incentive Plan (MIP).

About Medtronic

We lead global healthcare technology and boldly attack the most challenging health problems facing humanity by searching out and finding solutions. Our Mission — to alleviate pain, restore health, and extend life — unites a global team of 95,000+ passionate people. We are engineers at heart — putting ambitious ideas to work to generate real solutions for real people. From the R&D lab, to the factory floor, to the conference room, every one of us experiments, creates, builds, improves and solves. We have the talent, diverse perspectives, and guts to engineer the extraordinary.
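To make the sharding responsibility above concrete, here is a minimal sketch using the MongoDB sync Java driver; the mongos address, database, collection and shard key are illustrative assumptions only.

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class ShardingSetup {
    public static void main(String[] args) {
        // Must connect to a mongos router, not a shard member directly.
        try (MongoClient client = MongoClients.create("mongodb://mongos.example.internal:27017")) {
            MongoDatabase admin = client.getDatabase("admin");
            // Enable sharding on the database, then shard a collection on a
            // hashed key so writes are spread evenly across shards.
            admin.runCommand(new Document("enableSharding", "retail"));
            admin.runCommand(new Document("shardCollection", "retail.orders")
                    .append("key", new Document("orderId", "hashed")));
        }
    }
}

A hashed shard key favors even write distribution at the cost of ranged queries; a ranged key would be the alternative when range scans dominate.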
Posted 1 month ago
6.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description

Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organizations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realize their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.

Identify opportunities for automation and design/build end-to-end solutions. Coordinate and embed technical standards and help other team members gain knowledge of UI development and UI integration. Participate in reviews of designs, code, test plans and test results. Bring continuous improvements/efficiencies to the software or business processes by utilizing software engineering tools and various innovative techniques, and by reusing existing solutions; by means of automation, reduce design complexity, reduce time to response, and simplify the client/end-user experience. Identify and communicate best practices for front-end engineering. Present demos of the software products to stakeholders and internal/external customers, using knowledge of the product/solution and technologies to influence the direction and evolution of the product/solution.

Requirements

To be successful in this role, you should meet the following requirements:
Minimum 6 to 8 years of software development experience, including a minimum of 5 years of UI development experience.
Experience in UI development and UI integration: JavaScript (ES5, ES6), HTML5, CSS3, React.js, Angular, and any MVC/MVVM JS framework.
Solid knowledge of Node.js, npm, webpack, Ajax, Jasmine/Chai/JEST/Mocha (JS test frameworks), and CSS libraries.
Deep knowledge of React practices and commonly used modules based on extensive work experience.
Proficient understanding of web markup, including HTML5 and CSS3; RESTful web services and Node technologies; and Java, Hibernate, Spring, Spring Boot and web services (a minimal controller sketch follows these requirements).
Practitioner of agile working practices, with the ability to apply agile methodology and principles.
MUST have hands-on experience in React JS/Angular JS, having delivered projects on React JS/Angular JS in a key role.
Knowledge and understanding of accessibility principles and techniques.
Understanding of the SDLC and experience developing in an Agile environment.
Strong information architecture skills.
Knowledge of DevOps tooling (e.g. Jenkins, Git, Maven).
Experience in working with API management platforms (e.g. Mule gateway, ZooKeeper, Kong).
Knowledge of continuous integration and continuous deployment using Jenkins.
Experience with test-driven development.
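Since the role pairs a React/Angular front end with Java services, a minimal Spring Boot controller sketch of the kind such a UI would call is shown below; the endpoint path, payload and CORS origin are illustrative assumptions.

import java.util.Map;
import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class AccountController {

    // CORS is opened for a local React dev server here; production traffic would
    // instead route through an API management platform (e.g. Mule gateway or Kong, as above).
    @CrossOrigin(origins = "http://localhost:3000")
    @GetMapping("/api/accounts/{id}")
    public Map<String, Object> account(@PathVariable String id) {
        return Map.of("id", id, "status", "ACTIVE");
    }
}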
You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
Posted 1 month ago
3.0 - 5.0 years
8 - 12 Lacs
Karnāl
On-site
Job description

Title: Java Developer
Experience: 3-5 Yrs
Location: Karnal, Haryana
Job Type: Full-time

Technology Stack (Must Have): Java 8+, Spring Boot, Hibernate, Git, Shell Scripting (a minimal Spring Boot/Hibernate sketch follows this listing)
Good to Have: Jetty, Apache Maven, Apache Kafka, Apache Zookeeper, Docker, Kubernetes, SSL/TLS

What we’re looking for: We are seeking a Java Developer who will be responsible for the design, development, modification, debugging and/or maintenance of software systems. As part of this opportunity, you will be working with a fast-paced and rapidly growing team of really sharp techies from IITs, NITs, and alike for global clients.

Job Responsibilities:
Development, basic testing, and problem-solving.
Work closely with senior developers to understand the task-level requirements and get the desired job done.
Produce excellent quality code, adhering to expected coding standards and industry best practices.
Follow approved life cycle methodologies, create design documents, and perform program coding and testing.
Think through possible pitfalls and challenges and get support from senior developers.
Assist in building/upgrading API infrastructure.

Required Skills:
Programming skills in core Java and Spring Boot, with over 3 years of experience.
Experience in design, software development, and testing.
Proficient knowledge of coding principles.
Good knowledge of REST / web services.
Problem-solving/troubleshooting skills.
Good communication skills.
Proficiency in version control software such as Git.
Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements.
Familiarity with Agile/Scrum methodologies.

About Us: We are a software development studio with expertise in app and web development. We work with clients worldwide and have already shipped over a dozen products for several multi-million dollar startups. The core team is IIT Delhi ’10 alumni with a combined experience of over 40 years across corporates and startups in New York, Bengaluru, and Delhi. The founders have experience of working with large global MNCs, as well as building startups with successful exits, including a funded startup in the US. We’re based in the small, beautiful, and developed city of Karnal in Haryana. More about us here: www.hcode.tech

Why Work with Us:
Strong growth opportunities.
Very well-balanced work-life culture, with sports options such as basketball, badminton and TT on the premises, and a large, nature-feel office with an open working area.
Corporate health insurance for self, spouse, and children.
Startup pace in a strict 5-day Mon-Fri working format.
No-politics work environment built on merit and accountability.

Benefits:
Health insurance
Paid sick time
Paid time off
Provident Fund

Schedule: Day shift, Monday to Friday
Pay: ₹800,000.00 - ₹1,200,000.00 per year
Education: Bachelor's (Preferred)
Location Type: In-person
Ability to commute/relocate: Karnal, Haryana: Reliably commute or planning to relocate before starting work (Required)
Application Question(s): Present and expected CTC? How many years of experience do you have as a coder?
Experience: Java: 3 years (Required)
Location: Karnal, Haryana (Required)
Work Location: In person
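As a small illustration of the must-have stack (Spring Boot plus Hibernate), here is a minimal JPA entity and Spring Data repository sketch; the entity and field names are invented for illustration.

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;

@Entity
public class Customer {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String name;

    protected Customer() {} // no-arg constructor required by Hibernate
    public Customer(String name) { this.name = name; }
    public Long getId() { return id; }
    public String getName() { return name; }
}

// Spring Data derives the query from the method name at runtime,
// so no hand-written SQL is needed for simple lookups.
interface CustomerRepository extends JpaRepository<Customer, Long> {
    java.util.List<Customer> findByName(String name);
}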
Posted 1 month ago
10.0 years
3 - 6 Lacs
Noida
On-site
Through our dedicated associates, Conduent delivers mission-critical services and solutions on behalf of Fortune 100 companies and over 500 governments - creating exceptional outcomes for our clients and the millions of people who count on them. You have an opportunity to personally thrive, make a difference and be part of a culture where individuality is noticed and valued every day.

The candidate will work on modernizing a high-volume, large-scale, multi-tiered transaction processing production system to the cloud. Working in an Agile software development lifecycle methodology, they will analyze requirements, provide solutions and mentor junior developers. They will design and develop near real-time applications using various Java-based cloud technologies, work as a technical lead providing solutions for the modernization effort, analyze new requirements and provide estimates for new development, mentor team members, and take charge of the technical delivery of the project/module.

Very good verbal and written communication skills.
10+ years of experience in developing applications in Java.
Solid experience in modernizing monoliths to cloud and microservices (a minimal event-publishing sketch follows this posting).
Good experience with Spring MVC, Spring Boot, Angular, and Node.js.
Experience with technologies like Kafka, JMS, ZooKeeper, Spring MVC, Spring Boot, microservices, Azure, AWS, MongoDB, Elastic, Kibana, and high-volume transaction processing.
Experience in a design and implementation role.
Exposure to and implementation knowledge of IoT frameworks and MQTT.
Experience in designing and developing applications from requirements/use cases to production.
Experience with cloud technologies and deploying applications in cloud environments and containers.

Additional Desired Skills:
Creative problem-solving skills.
Work collaboratively with other members of the project team to ensure timely delivery of high-quality enterprise applications.
Plan and estimate development work needed to implement assigned tasks.
Transform complex requirements into working, maintainable enterprise-level solutions.
Perform detailed application design as appropriate.
Author and maintain design and technical documentation as necessary.
Provide leadership to other team members to deliver high-quality systems on schedule.
Knowledge of source code versioning (SVN) and tracking tools (JIRA, Bugzilla, etc.).
Participates in code reviews.
Participates in software design meetings and analyzes user needs to determine technical requirements.
Consults with end users to prototype, refine, test, and debug programs to meet needs.
Conducts tasks and assignments as directed.
Works under minimal supervision with some latitude for independent judgment.

Conduent is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, creed, religion, ancestry, national origin, age, gender identity, gender expression, sex/gender, marital status, sexual orientation, physical or mental disability, medical condition, use of a guide dog or service animal, military/veteran status, citizenship status, basis of genetic information, or any other group protected by law. People with disabilities who need a reasonable accommodation to apply for or compete for employment with Conduent may request such accommodation(s) by submitting their request through this form that must be downloaded: click here to access or download the form. Complete the form and then email it as an attachment to FTADAAA@conduent.com.
You may also click here to access Conduent's ADAAA Accommodation Policy. At Conduent we value the health and safety of our associates, their families and our community. For US applicants while we DO NOT require vaccination for most of our jobs, we DO require that you provide us with your vaccination status, where legally permissible. Providing this information is a requirement of your employment at Conduent.
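For the monolith-to-microservices modernization described above, services typically exchange events over Kafka rather than share a database. A minimal producer sketch follows, assuming the kafka-clients API; the broker, topic and payload are illustrative placeholders.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker; real deployments read this from configuration.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // acks=all waits for the full in-sync replica set, trading latency for
        // durability, which suits transaction-processing events.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("transactions", "txn-42", "{\"amount\":100}"));
        }
    }
}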
Posted 1 month ago