
1633 Grafana Jobs - Page 40

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 - 5.0 years

7 - 11 Lacs

Bengaluru

Work from Office

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% DevOps Engineers for a unique opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for professionals skilled in infrastructure automation, CI/CD pipelines, cloud computing, and monitoring tools. Proficiency in Terraform, Kubernetes, Docker, and cloud platforms is required. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
- Pay above market standards.
- Contract-based roles with project timelines of 2–12 months, or freelancing.
- Be part of an elite community of professionals who solve complex AI challenges.
- Work location: remote (highly likely), onsite at a client location, or Deccan AI's office (Hyderabad or Bangalore).

Responsibilities:
- Implement and manage CI/CD pipelines for efficient software deployment.
- Automate infrastructure provisioning using tools like Terraform, CloudFormation, or Ansible.
- Monitor system performance, troubleshoot issues, and ensure high availability; manage cloud environments (AWS, GCP, Azure) for scalability and security.
- Collaborate with development teams to ensure smooth integration and deployment processes.

Required Skills:
- Proficiency with CI/CD tools (Jenkins, GitLab CI, CircleCI) and infrastructure automation (Terraform, Ansible).
- Strong experience with cloud platforms (AWS, GCP, Azure) and containerization (Docker, Kubernetes).
- Familiarity with version control (Git), system monitoring tools (Prometheus, Grafana), and scripting languages (Python, Bash) for automation.

Nice to Have:
- Experience with serverless architectures and microservices.
- Knowledge of security best practices and compliance (IAM, encryption).

What are the next steps?
- Register on our Soul AI website; our team will review your profile.
- Clear all the screening rounds: complete the assessments once you are shortlisted.
- Profile matching and project allocation: be patient while we align your skills and preferences with an available project.

Skip the noise. Focus on opportunities built for you!
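The CI/CD pipeline work described above reduces, at its core, to running ordered stages and halting on the first failure. A minimal Python sketch of that gating logic (the stage names and steps are illustrative placeholders, not from the posting):

```python
# Minimal sketch of CI/CD stage gating: run stages in order, stop on first failure.
# Stage names and step callables are illustrative placeholders.

def run_pipeline(stages):
    """Run (name, func) stages in order; return (succeeded, completed_names)."""
    completed = []
    for name, step in stages:
        if not step():          # each step returns True on success
            return False, completed
        completed.append(name)
    return True, completed

stages = [
    ("build", lambda: True),
    ("test", lambda: True),
    ("deploy", lambda: False),  # simulate a failed deployment
]

ok, done = run_pipeline(stages)
```

Real pipelines (Jenkins, GitLab CI) express the same ordering and fail-fast behavior declaratively, but the control flow is the same.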

Posted 1 month ago

Apply

1.0 - 4.0 years

5 - 9 Lacs

Mumbai

Work from Office

AI Opportunities with Soul AI's Expert Community!
Are you an MLOps Engineer ready to take your expertise to the next level? Soul AI (by Deccan AI) is building an elite network of AI professionals, connecting top-tier talent with cutting-edge projects.

Why Join?
- Above market-standard compensation.
- Contract-based or freelance opportunities (2–12 months).
- Work with industry leaders solving real AI challenges.
- Flexible work locations: Remote | Onsite | Hyderabad/Bangalore.

Your Role:
- Architect and optimize ML infrastructure with Kubeflow, MLflow, and SageMaker Pipelines.
- Build CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI/CD).
- Automate ML workflows (feature engineering, retraining, deployment).
- Scale ML models with Docker, Kubernetes, and Airflow.
- Ensure model observability, security, and cost optimization in the cloud (AWS/GCP/Azure).

Must-Have Skills:
- Proficiency in Python, TensorFlow, PyTorch, and CI/CD pipelines.
- Hands-on experience with cloud ML platforms (AWS SageMaker, GCP Vertex AI, Azure ML).
- Expertise in monitoring tools (MLflow, Prometheus, Grafana).
- Knowledge of distributed data processing (Spark, Kafka).
- Bonus: experience in A/B testing, canary deployments, and serverless ML.

Next Steps:
- Register on Soul AI's website.
- Get shortlisted and complete the screening rounds.
- Join our Expert Community and get matched with top AI projects.

Don't just find a job. Build your future in AI with Soul AI!
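Automating retraining, as listed above, usually means a gate that compares a live model metric against the trained baseline and triggers retraining on drift. A minimal sketch; the AUC metric and the 0.05 drop threshold are illustrative assumptions, not from the posting:

```python
# Sketch of an automated-retraining gate: retrain when the moving average of a
# live metric drifts too far below the training baseline. Threshold is assumed.

def should_retrain(baseline_auc, recent_aucs, max_drop=0.05):
    """True when mean of recent live AUCs falls more than max_drop below baseline."""
    current = sum(recent_aucs) / len(recent_aucs)
    return (baseline_auc - current) > max_drop
```

In practice the metric would come from a monitoring store (e.g. Prometheus or MLflow) and the trigger would kick off an Airflow or Kubeflow retraining DAG.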

Posted 1 month ago

Apply

1.0 - 4.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Data Engineers for a unique opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for top-tier Data Engineers who are proficient in designing, building, and optimizing data pipelines. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
- Pay above market standards.
- Contract-based roles with project timelines of 2–12 months, or freelancing.
- Be part of an elite community of professionals who solve complex AI challenges.
- Work location: remote (highly likely), onsite at a client location, or Deccan AI's office (Hyderabad or Bangalore).

Responsibilities:
- Design and architect enterprise-scale data platforms, integrating diverse data sources and tools.
- Develop real-time and batch data pipelines to support analytics and machine learning.
- Define and enforce data governance strategies to ensure security, integrity, and compliance; optimize data pipelines for high performance, scalability, and cost efficiency in cloud environments.
- Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices.

Required Skills:
- Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP).
- Proficiency in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA).
- Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana).

Nice to Have:
- Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions.
- Contributions to open-source data engineering communities.

What are the next steps?
- Register on our Soul AI website; our team will review your profile.
- Clear all the screening rounds: complete the assessments once you are shortlisted.
- Profile matching and project allocation: be patient while we align your skills and preferences with an available project.

Skip the noise. Focus on opportunities built for you!
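The real-time pipelines mentioned above (Kafka, Kinesis, Flink) typically compute windowed aggregates over event streams. A plain-Python sketch of a tumbling-window count, the simplest such aggregation; the event shape and the 60-second window are assumptions for illustration:

```python
# Sketch of a tumbling-window aggregation, the kind of computation a Flink or
# Kafka Streams job performs. Event shape (timestamp_secs, key) is assumed.
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    """Group (timestamp_secs, key) events into fixed windows; count per key."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_secs) * window_secs
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(5, "click"), (30, "click"), (65, "click"), (70, "view")]
result = tumbling_window_counts(events)
```

A streaming engine adds the hard parts on top of this (out-of-order events, watermarks, state checkpointing), but the windowing arithmetic is the same.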

Posted 1 month ago

Apply

1.0 - 5.0 years

9 - 13 Lacs

Mumbai

Work from Office

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Platform Engineers for a unique opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for Platform Engineers focused on building scalable, high-performance AI/ML platforms. A strong background in cloud architecture, distributed systems, Kubernetes, and infrastructure automation is expected. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
- Pay above market standards.
- Contract-based roles with project timelines of 2–12 months, or freelancing.
- Be part of an elite community of professionals who solve complex AI challenges.
- Work location: remote (highly likely), onsite at a client location, or Deccan AI's office (Hyderabad or Bangalore).

Responsibilities:
- Architect and maintain scalable cloud infrastructure on AWS, GCP, or Azure using tools like Terraform and CloudFormation.
- Design and implement Kubernetes clusters with Helm, Kustomize, and a service mesh (Istio, Linkerd).
- Develop CI/CD pipelines using GitHub Actions, GitLab CI/CD, Jenkins, and Argo CD for automated deployments.
- Implement observability solutions (Prometheus, Grafana, ELK stack) for logging, monitoring, and tracing; automate infrastructure provisioning with tools like Ansible, Chef, and Puppet; optimize cloud costs and security.

Required Skills:
- Expertise in cloud platforms (AWS, GCP, Azure) and infrastructure as code (Terraform, Pulumi), with strong knowledge of Kubernetes, Docker, CI/CD pipelines, and scripting (Bash, Python).
- Experience with observability tools (Prometheus, Grafana, ELK stack) and security practices (RBAC, IAM).
- Familiarity with networking (VPCs, load balancers, DNS) and performance optimization.

Nice to Have:
- Experience with chaos engineering (Gremlin, LitmusChaos) and canary or blue-green deployments.
- Knowledge of multi-cloud environments, FinOps, and cost-optimization strategies.

What are the next steps?
- Register on our Soul AI website; our team will review your profile.
- Clear all the screening rounds: complete the assessments once you are shortlisted.
- Profile matching and project allocation: be patient while we align your skills and preferences with an available project.

Skip the noise. Focus on opportunities built for you!
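One concrete piece of the Kubernetes platform work above is autoscaling. The Horizontal Pod Autoscaler's documented rule is desired = ceil(currentReplicas × currentMetric / targetMetric); a sketch of that arithmetic with an added min/max clamp (the clamp bounds here are illustrative, not Kubernetes defaults):

```python
# Sketch of Kubernetes-HPA-style replica calculation:
# desired = ceil(current * currentMetric / targetMetric), clamped to a range.
import math

def desired_replicas(current_replicas, current_cpu_pct, target_cpu_pct,
                     min_replicas=1, max_replicas=10):
    """Return the replica count an HPA-like controller would request."""
    desired = math.ceil(current_replicas * current_cpu_pct / target_cpu_pct)
    return max(min_replicas, min(max_replicas, desired))
```

For example, 4 replicas at 90% CPU against a 60% target scale to 6; the real controller adds tolerances and stabilization windows on top of this formula.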

Posted 1 month ago

Apply

3.0 - 8.0 years

13 - 18 Lacs

Mumbai

Work from Office

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Data Architects for a unique opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for top-tier Data Architects who are proficient in designing, building, and optimizing data pipelines. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
- Pay above market standards.
- Contract-based roles with project timelines of 2–12 months, or freelancing.
- Be part of an elite community of professionals who solve complex AI challenges.
- Work location: remote (highly likely), onsite at a client location, or Deccan AI's office (Hyderabad or Bangalore).

Responsibilities:
- Design and architect enterprise-scale data platforms, integrating diverse data sources and tools.
- Develop real-time and batch data pipelines to support analytics and machine learning.
- Define and enforce data governance strategies to ensure security, integrity, and compliance; optimize data pipelines for high performance, scalability, and cost efficiency in cloud environments.
- Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices.

Required Skills:
- Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP).
- Proficiency in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA).
- Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana).

Nice to Have:
- Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions.
- Contributions to open-source data engineering communities.

What are the next steps?
- Register on our Soul AI website; our team will review your profile.
- Clear all the screening rounds: complete the assessments once you are shortlisted.
- Profile matching and project allocation: be patient while we align your skills and preferences with an available project.

Skip the noise. Focus on opportunities built for you!
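The GDPR/HIPAA governance requirement above often translates in practice to pseudonymizing identifier columns before data leaves a controlled zone. A hedged sketch using a stable SHA-256 digest, so pseudonymized rows remain joinable without exposing raw identifiers (the field names are hypothetical):

```python
# Sketch of column-level pseudonymization for data governance. Replacing PII
# with a stable digest keeps rows joinable across datasets. Field names assumed.
import hashlib

def pseudonymize(record, pii_fields=("email", "phone")):
    """Return a copy of the record with PII fields replaced by short digests."""
    out = dict(record)
    for field in pii_fields:
        if field in out:
            out[field] = hashlib.sha256(str(out[field]).encode()).hexdigest()[:12]
    return out
```

A production design would add a secret salt (unsalted hashes of low-entropy values like phone numbers are reversible by brute force) and enforce this at the pipeline layer, e.g. as a Spark column transform.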

Posted 1 month ago

Apply

2.0 - 5.0 years

7 - 11 Lacs

Mumbai

Work from Office

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% DevOps Engineers for a unique opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for professionals skilled in infrastructure automation, CI/CD pipelines, cloud computing, and monitoring tools. Proficiency in Terraform, Kubernetes, Docker, and cloud platforms is required. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
- Pay above market standards.
- Contract-based roles with project timelines of 2–12 months, or freelancing.
- Be part of an elite community of professionals who solve complex AI challenges.
- Work location: remote (highly likely), onsite at a client location, or Deccan AI's office (Hyderabad or Bangalore).

Responsibilities:
- Implement and manage CI/CD pipelines for efficient software deployment.
- Automate infrastructure provisioning using tools like Terraform, CloudFormation, or Ansible.
- Monitor system performance, troubleshoot issues, and ensure high availability; manage cloud environments (AWS, GCP, Azure) for scalability and security.
- Collaborate with development teams to ensure smooth integration and deployment processes.

Required Skills:
- Proficiency with CI/CD tools (Jenkins, GitLab CI, CircleCI) and infrastructure automation (Terraform, Ansible).
- Strong experience with cloud platforms (AWS, GCP, Azure) and containerization (Docker, Kubernetes).
- Familiarity with version control (Git), system monitoring tools (Prometheus, Grafana), and scripting languages (Python, Bash) for automation.

Nice to Have:
- Experience with serverless architectures and microservices.
- Knowledge of security best practices and compliance (IAM, encryption).

What are the next steps?
- Register on our Soul AI website; our team will review your profile.
- Clear all the screening rounds: complete the assessments once you are shortlisted.
- Profile matching and project allocation: be patient while we align your skills and preferences with an available project.

Skip the noise. Focus on opportunities built for you!

Posted 1 month ago

Apply

1.0 - 4.0 years

6 - 10 Lacs

Kolkata

Work from Office

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Data Engineers for a unique opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for top-tier Data Engineers who are proficient in designing, building, and optimizing data pipelines. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
- Pay above market standards.
- Contract-based roles with project timelines of 2–12 months, or freelancing.
- Be part of an elite community of professionals who solve complex AI challenges.
- Work location: remote (highly likely), onsite at a client location, or Deccan AI's office (Hyderabad or Bangalore).

Responsibilities:
- Design and architect enterprise-scale data platforms, integrating diverse data sources and tools.
- Develop real-time and batch data pipelines to support analytics and machine learning.
- Define and enforce data governance strategies to ensure security, integrity, and compliance; optimize data pipelines for high performance, scalability, and cost efficiency in cloud environments.
- Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices.

Required Skills:
- Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP).
- Proficiency in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA).
- Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana).

Nice to Have:
- Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions.
- Contributions to open-source data engineering communities.

What are the next steps?
- Register on our Soul AI website; our team will review your profile.
- Clear all the screening rounds: complete the assessments once you are shortlisted.
- Profile matching and project allocation: be patient while we align your skills and preferences with an available project.

Skip the noise. Focus on opportunities built for you!

Posted 1 month ago

Apply

1.0 - 4.0 years

5 - 9 Lacs

Kolkata

Work from Office

AI Opportunities with Soul AI's Expert Community!
Are you an MLOps Engineer ready to take your expertise to the next level? Soul AI (by Deccan AI) is building an elite network of AI professionals, connecting top-tier talent with cutting-edge projects.

Why Join?
- Above market-standard compensation.
- Contract-based or freelance opportunities (2–12 months).
- Work with industry leaders solving real AI challenges.
- Flexible work locations: Remote | Onsite | Hyderabad/Bangalore.

Your Role:
- Architect and optimize ML infrastructure with Kubeflow, MLflow, and SageMaker Pipelines.
- Build CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI/CD).
- Automate ML workflows (feature engineering, retraining, deployment).
- Scale ML models with Docker, Kubernetes, and Airflow.
- Ensure model observability, security, and cost optimization in the cloud (AWS/GCP/Azure).

Must-Have Skills:
- Proficiency in Python, TensorFlow, PyTorch, and CI/CD pipelines.
- Hands-on experience with cloud ML platforms (AWS SageMaker, GCP Vertex AI, Azure ML).
- Expertise in monitoring tools (MLflow, Prometheus, Grafana).
- Knowledge of distributed data processing (Spark, Kafka).
- Bonus: experience in A/B testing, canary deployments, and serverless ML.

Next Steps:
- Register on Soul AI's website.
- Get shortlisted and complete the screening rounds.
- Join our Expert Community and get matched with top AI projects.

Don't just find a job. Build your future in AI with Soul AI!

Posted 1 month ago

Apply

2.0 - 5.0 years

7 - 11 Lacs

Kolkata

Work from Office

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% DevOps Engineers for a unique opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for professionals skilled in infrastructure automation, CI/CD pipelines, cloud computing, and monitoring tools. Proficiency in Terraform, Kubernetes, Docker, and cloud platforms is required. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
- Pay above market standards.
- Contract-based roles with project timelines of 2–12 months, or freelancing.
- Be part of an elite community of professionals who solve complex AI challenges.
- Work location: remote (highly likely), onsite at a client location, or Deccan AI's office (Hyderabad or Bangalore).

Responsibilities:
- Implement and manage CI/CD pipelines for efficient software deployment.
- Automate infrastructure provisioning using tools like Terraform, CloudFormation, or Ansible.
- Monitor system performance, troubleshoot issues, and ensure high availability; manage cloud environments (AWS, GCP, Azure) for scalability and security.
- Collaborate with development teams to ensure smooth integration and deployment processes.

Required Skills:
- Proficiency with CI/CD tools (Jenkins, GitLab CI, CircleCI) and infrastructure automation (Terraform, Ansible).
- Strong experience with cloud platforms (AWS, GCP, Azure) and containerization (Docker, Kubernetes).
- Familiarity with version control (Git), system monitoring tools (Prometheus, Grafana), and scripting languages (Python, Bash) for automation.

Nice to Have:
- Experience with serverless architectures and microservices.
- Knowledge of security best practices and compliance (IAM, encryption).

What are the next steps?
- Register on our Soul AI website; our team will review your profile.
- Clear all the screening rounds: complete the assessments once you are shortlisted.
- Profile matching and project allocation: be patient while we align your skills and preferences with an available project.

Skip the noise. Focus on opportunities built for you!

Posted 1 month ago

Apply

1.0 - 5.0 years

9 - 13 Lacs

Kolkata

Work from Office

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Platform Engineers for a unique opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for Platform Engineers focused on building scalable, high-performance AI/ML platforms. A strong background in cloud architecture, distributed systems, Kubernetes, and infrastructure automation is expected. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
- Pay above market standards.
- Contract-based roles with project timelines of 2–12 months, or freelancing.
- Be part of an elite community of professionals who solve complex AI challenges.
- Work location: remote (highly likely), onsite at a client location, or Deccan AI's office (Hyderabad or Bangalore).

Responsibilities:
- Architect and maintain scalable cloud infrastructure on AWS, GCP, or Azure using tools like Terraform and CloudFormation.
- Design and implement Kubernetes clusters with Helm, Kustomize, and a service mesh (Istio, Linkerd).
- Develop CI/CD pipelines using GitHub Actions, GitLab CI/CD, Jenkins, and Argo CD for automated deployments.
- Implement observability solutions (Prometheus, Grafana, ELK stack) for logging, monitoring, and tracing; automate infrastructure provisioning with tools like Ansible, Chef, and Puppet; optimize cloud costs and security.

Required Skills:
- Expertise in cloud platforms (AWS, GCP, Azure) and infrastructure as code (Terraform, Pulumi), with strong knowledge of Kubernetes, Docker, CI/CD pipelines, and scripting (Bash, Python).
- Experience with observability tools (Prometheus, Grafana, ELK stack) and security practices (RBAC, IAM).
- Familiarity with networking (VPCs, load balancers, DNS) and performance optimization.

Nice to Have:
- Experience with chaos engineering (Gremlin, LitmusChaos) and canary or blue-green deployments.
- Knowledge of multi-cloud environments, FinOps, and cost-optimization strategies.

What are the next steps?
- Register on our Soul AI website; our team will review your profile.
- Clear all the screening rounds: complete the assessments once you are shortlisted.
- Profile matching and project allocation: be patient while we align your skills and preferences with an available project.

Skip the noise. Focus on opportunities built for you!

Posted 1 month ago

Apply

3.0 - 8.0 years

13 - 18 Lacs

Kolkata

Work from Office

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Data Architects for a unique opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for top-tier Data Architects who are proficient in designing, building, and optimizing data pipelines. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
- Pay above market standards.
- Contract-based roles with project timelines of 2–12 months, or freelancing.
- Be part of an elite community of professionals who solve complex AI challenges.
- Work location: remote (highly likely), onsite at a client location, or Deccan AI's office (Hyderabad or Bangalore).

Responsibilities:
- Design and architect enterprise-scale data platforms, integrating diverse data sources and tools.
- Develop real-time and batch data pipelines to support analytics and machine learning.
- Define and enforce data governance strategies to ensure security, integrity, and compliance; optimize data pipelines for high performance, scalability, and cost efficiency in cloud environments.
- Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices.

Required Skills:
- Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP).
- Proficiency in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA).
- Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana).

Nice to Have:
- Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions.
- Contributions to open-source data engineering communities.

What are the next steps?
- Register on our Soul AI website; our team will review your profile.
- Clear all the screening rounds: complete the assessments once you are shortlisted.
- Profile matching and project allocation: be patient while we align your skills and preferences with an available project.

Skip the noise. Focus on opportunities built for you!

Posted 1 month ago

Apply

6.0 - 11.0 years

8 - 12 Lacs

Pune

Work from Office

Job Title: Technology Service Analyst, AS
Location: Pune, India
Corporate Title: AS

Role Description
At the heart of Deutsche Bank's client franchise is the Corporate Bank (CB), a market leader in Cash Management, Trade Finance & Lending, Securities Services, and Trust & Agency Services. Focusing on the treasurers and finance departments of corporate and commercial clients and financial institutions across the globe, our universal expertise and global network allow us to offer truly integrated and effective solutions. You will operate within Corporate Bank Production as a Production Support Engineer in the Payments domain. The Payments Production domain is part of Cash Management under Deutsche Bank's Corporate Banking division, which supports mission-critical payments processing and FX platforms for multiple business lines such as high-value, low-value, bulk, instant, and cheque payments. The team provides 24x7 support and uses a follow-the-sun model to deliver exceptional, timebound services to clients. Our objective at Corporate Bank Production is to consistently strive to make production better, ensuring a dependable end-to-end experience for our corporate clients running their daily cash management business through various access channels. We also implement, encourage, and invest in building an engineering culture in our daily activities to achieve the wider objectives. Our strategy leads to fewer issues, faster resolution of issues, and safeguards around any changes made to our production environment across all domains at Corporate Bank. You will be accountable for driving a culture of proactive continual improvement in the production environment through application and user-request support, troubleshooting and resolving errors in production, automation of manual work, monitoring improvements, and platform hygiene, as well as supporting the resolution of issues and conflicts and preparing reports and meetings. The candidate should have experience with all relevant tools used in the Service Operations environment, have specialist expertise in one or more technical domains, and ensure that all associated Service Operations stakeholders are provided with an optimum level of service in line with Service Level Agreements (SLAs) and Operating Level Agreements (OLAs).

What we'll offer you
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities
- Act as a Production Support Analyst for the CB production team, providing second-level support for the applications under the tribe and working with key stakeholders and team members across the globe in a 365-day, 24/7 working model.
- Serve as an individual contributor and prime liaison for the application suite into the incident, problem, change, release, capacity, and continuous-improvement processes.
- Escalate, manage, and communicate major production incidents.
- Liaise with development teams on new application handover and third-line escalation of issues.
- Carry out application rollout activities (may include some weekend work).
- Manage SLOs for faster resolution and fewer incidents to support production application stability.
- Develop a continuous service improvement approach to resolve IT failings, drive efficiencies, and remove repetition to streamline support activities, reduce risk, and improve system availability by understanding emerging trends and proactively addressing them.
- Carry out technical analysis of the production platform to identify and remediate performance and resiliency issues.
- Update the run book and KEDB as and when required.

Your skills and experience
- Good experience in production application support and ITIL practices.
- Very good hands-on knowledge of databases (Oracle, PL/SQL, etc.), including working experience writing SQL scripts and queries.
- Very good hands-on experience with UNIX/Linux, Solaris, Java J2EE, Python, PowerShell scripts, and automation tooling (RPA, workload, batch); exposure to Kafka, Kubernetes, and microservices is an added advantage.
- Experience with application performance monitoring tools (Geneos, Splunk, Grafana, New Relic) and scheduling tools (Control-M).
- Excellent team player; people-management experience is an advantage.
- Bachelor's degree; master's degree a plus.
- Previous relevant experience in the banking domain, with 6+ years of IT experience in large corporate environments, specifically in production support.
- Operating systems (e.g., UNIX, Windows); understanding of middleware environments (e.g., MQ, WebLogic, Tomcat, JBoss, Apache, Kafka) and database environments (e.g., Oracle, MS SQL, Sybase, NoSQL).
- Experience with APM tools like Splunk and Geneos; Control-M/Autosys; AppDynamics.

Nice to have
- Cloud services: GCP.
- Exposure to Payments domain fundamentals and SWIFT message types.
- Knowledge of uDeploy and Bitbucket.

Skills that will help you excel
- Self-motivated, with excellent interpersonal, presentation, and communication skills.
- Able to think strategically, with strong analytical and problem-solving skills.
- Able to handle multiple demands and priorities simultaneously, working under pressure, in an organized manner, with teams across multiple locations and time zones.
- Able to connect with, manage, and influence people from different backgrounds and cultures.
- A strong team player within a global team, communicating, managing, and cooperating closely at a global level, while able to take ownership and deliver independently.

How we'll support you
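Managing SLOs for "faster resolution and fewer incidents", as described above, implies checking incident resolution times against per-priority targets. A minimal sketch of that check; the priority tiers and hour targets below are invented for illustration and are not Deutsche Bank's actual SLAs:

```python
# Sketch of an SLA-breach check for incident tickets.
# Priority tiers and hour targets are illustrative assumptions.
from datetime import datetime, timedelta

SLA_HOURS = {"P1": 4, "P2": 8, "P3": 24}  # hypothetical resolution targets

def breached_sla(priority, opened_at, resolved_at):
    """True when resolution time exceeded the priority's SLA target."""
    return (resolved_at - opened_at) > timedelta(hours=SLA_HOURS[priority])
```

A support team would typically run this kind of check over ticket exports (e.g. from ServiceNow) to feed breach counts into a Grafana dashboard.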

Posted 1 month ago

Apply

7.0 - 12.0 years

30 - 35 Lacs

Pune

Work from Office

: Job TitleQA Engineer LocationPune, India Corporate TitleAssistant Vice President Role Description Technology serves as the foundation of our entire organization. Our Technology, Data, and Innovation (TDI) strategy aims to enhance engineering capabilities, implement an agile delivery framework, and modernize the bank's IT infrastructure. We are committed to investing in and developing a team of forward-thinking technology professionals, offering them the training, autonomy, and opportunities necessary to engage in groundbreaking work. As a QA lead Engineer, you will oversee the comprehensive delivery of engineering solutions to meet business objectives. You will have deep expertise in Hadoop ecosystem administration, scripting, and managing complex, large-scale migration projects. We are seeking a highly skilled Senior Test/QA Engineer to lead the quality assurance and testing efforts for a large-scale migration from Cloudera Distribution Hadoop (CDH) or Cloudera Data Platform (CDP) to Apache Hadoop. The ideal candidate will have extensive experience in end-to-end testing, automation, and developing comprehensive test suites for the Hadoop ecosystem. This role requires hands-on expertise in validating complex migrations, ensuring data integrity, performance, and system reliability while collaborating with cross-functional teams. You will have the opportunity to collaborate closely with clients while being part of a larger, creative, and innovative team dedicated to making a significant impact. In RiskFinder, you will join a local team, collaborating with other teams both in your area and in different geographical locations. You will have the chance to architect, design, and implement an open-source big data platform, enabling clients to produce, access, and analyze extensive datasets through our custom components and applications. 
What we'll offer you: 100% reimbursement under childcare assistance benefit (gender neutral); sponsorship for industry-relevant certifications and education; accident and term life insurance. Your key responsibilities: Develop robust architectures and designs for the big data platform and applications within the Apache Hadoop ecosystem. Implement and deploy the big data platform and solutions on-premises and in hybrid cloud environments. Develop and execute migration scripts, workflows and automation tools to streamline the transition process. Read, understand, and modify open-source code to implement bug fixes and perform upgrades. Security architecture: ensure all solutions adhere to security best practices and compliance requirements. Design, develop, and execute E2E test plans to validate the migration of data, applications, and workflows from Cloudera to Apache Hadoop. Test Hadoop ecosystem components (HDFS, YARN, Hive, Spark, Kafka, Oozie, etc.) to ensure functionality, performance, and compatibility post-migration. Validate data integrity, consistency, and accuracy across source and target systems during migration. Develop and maintain automated test scripts using tools like Python, Ansible, Shell, Java, or Scala to validate Hadoop cluster configurations, data pipelines, and workflows. Implement automation frameworks for regression, performance, and scalability testing of the Apache Hadoop environment. Integrate automated tests into CI/CD pipelines using tools like Jenkins, GitLab CI, or similar. Build comprehensive test suites to cover functional, non-functional, security, and performance aspects of the Hadoop ecosystem. Create test cases for edge cases, failure scenarios, and high-availability configurations (e.g., Kerberos, Ranger, failover). Your skills and experience: Proven experience in architecting, designing, building, and deploying big data platforms and applications using the Apache Hadoop ecosystem in hybrid cloud and private cloud scenarios.
Experience with hybrid cloud big data platform designs and deployments, especially in AWS, Azure, or GCP. Extensive experience in QA/testing, with at least 10 years focused on big data platforms and the Hadoop ecosystem. Proven experience testing large-scale Hadoop migrations (e.g., Cloudera to Apache Hadoop or similar). Hands-on experience validating petabyte-scale data systems in production environments. Experience in large-scale data platform builds and application migrations. Expert knowledge of the Apache Hadoop ecosystem and associated Apache projects (e.g., HDFS, Hive, HBase, Spark, Kafka, YARN, etc.). Strong programming/scripting skills in Python, Java, Shell, Ansible, Scala, or Bash for test automation. Proficiency in test automation frameworks (e.g., Selenium, TestNG, JUnit, or custom frameworks for big data). Experience with performance testing tools (e.g., JMeter, Gatling) and monitoring tools (e.g., Grafana, Prometheus). Familiarity with security testing (e.g., Kerberos, Ranger, LDAP) and cluster management tools (e.g., Ambari). Knowledge of CI/CD pipelines and tools like Jenkins, Git, or GitLab CI. Experience with version upgrades of technology stacks. Experience testing cloud-based Hadoop deployments (e.g., AWS EMR, Azure HDInsight, GCP Dataproc). Familiarity with containerized environments (e.g., Docker, Kubernetes) for testing Hadoop deployments. Excellent problem-solving and analytical skills. Strong communication and collaboration skills to work effectively with global teams. Ability to work independently and take initiative. Contributions to open-source projects. How we'll support you
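Much of the data-integrity validation this role describes reduces to comparing fingerprints of source and target datasets. As a minimal illustrative sketch (not the team's actual framework; the function names are hypothetical), a row-count-plus-checksum comparison of the kind a migration test suite automates might look like:

```python
import hashlib
import json

def table_fingerprint(rows):
    """Order-independent fingerprint of a table: row count plus a
    checksum over the sorted, canonically serialized rows."""
    serialized = sorted(json.dumps(r, sort_keys=True) for r in rows)
    digest = hashlib.sha256("\n".join(serialized).encode()).hexdigest()
    return len(rows), digest

def validate_migration(source_rows, target_rows):
    """Compare source and target tables; return (ok, detail)."""
    src_count, src_hash = table_fingerprint(source_rows)
    tgt_count, tgt_hash = table_fingerprint(target_rows)
    if src_count != tgt_count:
        return False, f"row count mismatch: {src_count} vs {tgt_count}"
    if src_hash != tgt_hash:
        return False, "checksum mismatch: same count, different content"
    return True, "tables match"
```

In practice the same idea is usually pushed down into Hive/Spark queries (COUNT plus an aggregate hash per partition) so that petabyte-scale tables are never pulled to the test host.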

Posted 1 month ago

Apply

2.0 - 7.0 years

11 - 15 Lacs

Pune

Work from Office

Job Title: L2 Lead Technical Application Support, Associate. Location: Pune, India. Role Description: Our organization within Deutsche Bank is Compliance Production Services. We are responsible for providing technical L2 application support for business applications. The Compliance line of business has a current portfolio of 20 applications. The organization is in the process of transforming itself using Google Cloud and many new technology offerings. As an L2 Lead Technical Application Support, you will provide technical hands-on oversight to several support teams and be actively involved in technical issue resolution across multiple applications. You will also work as application lead and will be responsible for technical & operational processes for all applications you support. What we'll offer you: 100% reimbursement under childcare assistance benefit (gender neutral); sponsorship for industry-relevant certifications and education; accident and term life insurance. Your key responsibilities: Act as application lead: own the technical, process, operational, and people responsibilities for all applications supported. Provide technical hands-on oversight to several support teams and be actively involved in technical issues across multiple applications. Build up technical subject matter expertise on the applications being supported, including business flows, application architecture, and hardware configuration. Maintain documentation, knowledge articles, and runbooks. Assist in the process to approve application code release change tickets as well as tasks assigned to support to perform. Build and maintain effective and productive relationships with the stakeholders in business, development, infrastructure, and third-party systems / data providers & vendors. Assist in special projects and view them as opportunities to enhance your skillset and develop your growth.
These projects can include coding using shell scripting, Python and YAML for support functions. Your skills and experience: Minimum 2 years of experience in providing hands-on IT support and interacting with applications and end users. Engineering degree/postgraduation from an accredited college or university with a concentration in Computer Science or an IT-related discipline. Knowledgeable in cloud products like Google Cloud Platform (GCP) and hybrid applications. Strong understanding of ITIL/SRE/DevOps best practices for supporting a production environment. Working knowledge of Elasticsearch, WebLogic, Tomcat, OpenShift, Grafana, Prometheus, and Google Cloud Monitoring. Understanding of Java (J2SE), Spring, Hibernate, microservices. Red Hat Enterprise Linux (RHEL) professional skill in searching logs, process commands, starting/stopping processes, and use of OS commands to aid in tasks needed to resolve or investigate issues. Shell scripting knowledge a plus. Understanding of database concepts and exposure to working with Oracle and SQL databases. Skills That Will Help You Excel: Strong written and oral communication skills, including the ability to communicate technical information to a non-technical audience, and good analytical and problem-solving skills. Able to train, coach, and mentor, and know where each technique is best applied. Confident working with several programming languages, tools, and technologies, including Infrastructure as Code, with the ability to guide colleagues as to the context where each is useful (preferably Python and Terraform). Experience with GCP or another public cloud provider to build applications. Experience in an investment bank, financial institution or large corporation using enterprise hardware and software. How we'll support you
About us and our teams Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.

Posted 1 month ago

Apply

5.0 - 10.0 years

35 - 40 Lacs

Pune

Work from Office

Job Title: ServiceNow SRE Support Consultant. Location: Pune, India. Corporate Title: AVP. Role Description: We are seeking a ServiceNow SRE Support Consultant with 5+ years of experience to ensure the stability, scalability, and reliability of our ServiceNow platform. This role will focus on monitoring, troubleshooting, automation, and performance optimization of the ServiceNow environment while applying SRE and DevOps best practices. The ideal candidate should have a strong background in ServiceNow administration, UNIX/Linux, Windows Server, and cloud infrastructure. What we'll offer you: 100% reimbursement under childcare assistance benefit (gender neutral); sponsorship for industry-relevant certifications and education; accident and term life insurance. Your key responsibilities: ServiceNow Platform Support: Monitor and maintain the health, availability, and performance of the ServiceNow platform. Troubleshoot ServiceNow infrastructure issues, including instance performance, integrations, and database bottlenecks. Collaborate with developers and architects to enhance system stability and optimize workflows. Perform ServiceNow instance upgrades, patching, cloning, and configuration tuning. Incident Management & Troubleshooting: Handle incident response and escalations to resolve ServiceNow-related issues efficiently. Conduct root cause analysis (RCA) and implement long-term fixes. Define and maintain SLOs/SLIs to measure and improve system reliability. Work closely with the ServiceNow support team and infrastructure teams to minimize downtime. System Administration & Automation: Manage ServiceNow MID Servers and integrations with third-party systems. Administer UNIX/Linux and Windows Server environments supporting ServiceNow. Automate routine administration tasks using PowerShell, Bash, Python, or Ansible. Monitoring & Performance Optimization: Implement and maintain proactive monitoring solutions using Splunk, ELK, Prometheus, Grafana, or ServiceNow Event Management.
Optimize ServiceNow database performance, query execution, and API integrations. Perform capacity planning and performance tuning to ensure seamless scalability. Your skills and experience: Technical Expertise: 5+ years of experience in ServiceNow administration, support, or SRE roles. Strong knowledge of ServiceNow architecture, modules, and instance management. Experience in UNIX/Linux and Windows Server administration. Hands-on expertise in scripting (Bash, PowerShell, Python) and automation tools (Ansible, Terraform, Jenkins, Git). Proficiency in networking (DNS, TCP/IP, Load Balancing, Firewalls) related to ServiceNow. ServiceNow Knowledge: Experience with ServiceNow ITOM (Discovery, Event Management, CMDB, Performance Analytics). Understanding of ServiceNow upgrades, patching, and instance cloning. Basic knowledge of ServiceNow scripting (JavaScript, REST API, Web Services). Soft Skills & Collaboration: Strong problem-solving and analytical skills. Ability to work in a fast-paced environment and manage multiple tasks. Excellent communication skills to collaborate with cross-functional teams. Proactive mindset with a focus on automation and continuous improvement. Preferred Qualifications: ServiceNow Certified System Administrator (CSA) or ITOM certifications. Linux (RHCSA, RHCE) or Windows (MCSA, MCSE) certifications. Cloud certifications (AWS, Azure, GCP) are a plus. How we'll support you. About us and our teams: Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
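Defining SLOs/SLIs and tracking the error budget is central to the SRE side of roles like this one. A minimal sketch of a request-based error-budget calculation (illustrative only; the function name and inputs are assumptions, not a ServiceNow API):

```python
def error_budget_remaining(slo_target, total_requests, failed_requests):
    """Fraction of the error budget left for a request-based SLO.

    slo_target: success-rate objective, e.g. 0.999 for 99.9%.
    Returns ~1.0 when no budget is spent, ~0.0 when exactly exhausted,
    and a negative value when the SLO has been breached.
    """
    allowed_failures = (1 - slo_target) * total_requests
    if allowed_failures == 0:
        # A 100% SLO has no budget: any failure is a breach.
        return 1.0 if failed_requests == 0 else float("-inf")
    return 1 - failed_requests / allowed_failures
```

For a 99.9% SLO over one million requests, the budget is 1,000 failures; 500 failures leaves roughly half the budget, and 2,000 means the objective has been missed.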

Posted 1 month ago

Apply

10.0 - 15.0 years

14 - 19 Lacs

Pune

Work from Office

Job Title: Principal Engineer. Location: Pune. As a principal software engineer, you will be responsible for designing, developing, and maintaining core parts of our software and infrastructure, contributing heavily to the codebase and collaborating with engineers at all levels. You will play a pivotal role in shaping our architecture, ensuring the robustness of our systems, and mentoring junior engineers to help them elevate their skills. This role is ideal for someone who enjoys working on challenging technical problems, has a deep understanding of modern technology trends, and is passionate about software craftsmanship. This is purely a technical position with no people management responsibilities. What we'll offer you: 100% reimbursement under childcare assistance benefit (gender neutral); sponsorship for industry-relevant certifications and education; accident and term life insurance. Key responsibilities: Design, develop and maintain high-performance, scalable software in Java and Kotlin. Contribute actively to the codebase, ensuring quality, performance, and reliability. Develop solutions using MongoDB and work on optimization, indexing and queries. Architect and implement microservices deployed in GKE. Ensure compliance with security regulations. Review and update policies relevant to internal systems and equipment. Mentor and guide engineers across multiple teams, setting the standard for technical excellence. Collaborate with product managers, architects, and cross-functional teams to translate business requirements into technical solutions. Qualification: 10+ years of professional software development experience, with expertise in Java. Strong experience with MongoDB and working with data-intensive applications. Experience with modern software engineering practices, including test-driven development, continuous integration, and agile methodologies.
Solid hands-on experience with Kubernetes. Experience designing and running systems at scale in cloud environments, preferably GCP. Familiarity with CI/CD tools and monitoring, logging and alerting stacks (e.g. Prometheus, Grafana, ELK). Strong experience with reactive or event-driven architectures. Experience with infrastructure-as-code tooling, e.g. Terraform. How we'll support you. About us and our teams: Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.

Posted 1 month ago

Apply

4.0 - 9.0 years

14 - 19 Lacs

Pune

Work from Office

Job Title: Principal Engineer. Location: Pune. As a principal software engineer, you will be responsible for designing, developing, and maintaining core parts of our software and infrastructure, contributing heavily to the codebase and collaborating with engineers at all levels. You will play a pivotal role in shaping our architecture, ensuring the robustness of our systems, and mentoring junior engineers to help them elevate their skills. This role is ideal for someone who enjoys working on challenging technical problems, has a deep understanding of modern technology trends, and is passionate about software craftsmanship. This is purely a technical position with no people management responsibilities. What we'll offer you: 100% reimbursement under childcare assistance benefit (gender neutral); sponsorship for industry-relevant certifications and education; accident and term life insurance. Key responsibilities: Design, develop and maintain high-performance, scalable software in Java and Kotlin. Contribute actively to the codebase, ensuring quality, performance, and reliability. Develop solutions using MongoDB and work on optimization, indexing and queries. Architect and implement microservices deployed in GKE. Ensure compliance with security regulations. Review and update policies relevant to internal systems and equipment. Mentor and guide engineers across multiple teams, setting the standard for technical excellence. Collaborate with product managers, architects, and cross-functional teams to translate business requirements into technical solutions. Qualification: 10+ years of professional software development experience, with expertise in Java. Strong experience with MongoDB and working with data-intensive applications. Experience with modern software engineering practices, including test-driven development, continuous integration, and agile methodologies.
Solid hands-on experience with Kubernetes. Experience designing and running systems at scale in cloud environments, preferably GCP. Familiarity with CI/CD tools and monitoring, logging and alerting stacks (e.g. Prometheus, Grafana, ELK). Strong experience with reactive or event-driven architectures. Experience with infrastructure-as-code tooling, e.g. Terraform. How we'll support you. About us and our teams: Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.

Posted 1 month ago

Apply

15.0 - 20.0 years

45 - 50 Lacs

Pune

Work from Office

Job Title: Solution Architect, VP. Location: Pune, India. Corporate Title: VP. Role Description: Your role will be that of an individual contributor in the team. You will be working closely with a team comprising engineers, a lead, functional analysts, and a test lead. The team is responsible for developing and implementing microservices, front-end application development & enhancements, and partner and client integrations. As a Solution Architect you are expected to be hands-on with software development, contribute towards good software design, and test the developed software. You will also be engaged in peer code reviews, documenting design decisions and component APIs. You will be participating in daily stand-up meetings, analysing software defects and fixing them in a timely manner, and working closely with the Functional Analysis and Quality Assurance teams. As/when required, you are also expected to train other team members to bring them up to speed. The Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.
What we'll offer you: 100% reimbursement under childcare assistance benefit (gender neutral); sponsorship for industry-relevant certifications and education; accident and term life insurance. Your key responsibilities: As a Solution Architect, you will be responsible for creating the architecture and design of the applications on the cloud platform. You are expected to have practical architecture depth to meet the business and technical requirements, and to have hands-on engineering experience. You are expected to create the design and architecture blueprint for multi-region, highly available applications on Google Cloud Platform. You will be responsible for the design and implementation of various non-functional requirements like security, scalability, observability, disaster recovery, and data protection. You will provide technical leadership to the engineering teams and deliver the application releases. You will work with the SRE and support teams to help bring architectural improvements. Your skills and experience: Expert-level cloud architecture experience at the solution design and implementation level. Overall 15+ years of hands-on coding and engineering experience, with at least 5 years of experience in designing and building applications on cloud platforms. Cloud certifications for GCP (preferred), AWS or Azure. Well versed with the Well-Architected Framework pillars like Security, Availability, Reliability, Operational Excellence, Cost Optimization. Hands-on experience with cloud services like Kubernetes, API gateways, load balancers, cloud storage services, VPCs, NAT gateways, Cloud SQL databases, VMs and compute services like Cloud Run. Hands-on development experience in building applications using Core Java, Spring Boot, REST APIs, databases like Oracle and MongoDB, and Apache Kafka. Good knowledge of frontend technologies like JavaScript, React.js, TypeScript.
Experience in designing multi-region Disaster Recovery (DR) solutions and achieving the Recovery Point Objectives (RPO) and Recovery Time Objectives (RTO). Experience in building highly available, low-latency and high-volume applications, performance testing and tuning. Good knowledge of microservices architecture. Working knowledge of DevOps tools like Jenkins/GitHub Actions, Terraform, Helm charts. Experience in building application observability using tools like Prometheus/Grafana and New Relic, and creating SLO dashboards. Good understanding of security principles like encryption techniques, handling security vulnerabilities, and building solutions to prevent DDoS attacks. Nice to have skills: Functional: payments industry overview, payment processing, real-time payments processing. Shell scripting is nice to have. Change management process exposure. Software and infra production promotion experience. Test automation frameworks. Moderate coding skills in Python. Experience in distributed system development. Cross-platform development in several CPU/operating system environments and network protocols. Demonstrated expertise in problem-solving and technical innovation. Data structures, algorithms and design patterns. Data stores, persistence, caching (Oracle, MongoDB, Cassandra, and Hadoop tools, memcache etc.). How we'll support you. About us and our teams: Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.

Posted 1 month ago

Apply

3.0 - 8.0 years

9 - 13 Lacs

Bengaluru

Work from Office

Job Title: Technology Service Analyst, AS. Location: Bangalore, India. Role Description: You will be operating within the Production Services team of the Trade Finance and Lending domain, a subdivision of Corporate Bank Production Services, as a Production Support Engineer. In this role, you will be accountable for the following: resolving user support requests and troubleshooting functional, application, and infrastructure incidents in the production environment; working on identified initiatives to automate manual work, improve application and infrastructure monitoring, and maintain platform hygiene; eyes-on-glass monitoring of services and batch; preparing and fulfilling data requests; participation in incident, change and problem management meetings as required. Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support. What we'll offer you:
100% reimbursement under childcare assistance benefit (gender neutral); sponsorship for industry-relevant certifications and education; accident and term life insurance. Your key responsibilities: Provide hands-on technical support for a suite of applications/platforms within Deutsche Bank. Build up technical subject matter expertise on the applications/platforms being supported, including business flows, the application architecture and the hardware configuration. Resolve service requests submitted by the application end users to the best of L2 ability and escalate any issues that cannot be resolved to L3. Conduct real-time monitoring to ensure application SLAs are achieved and maximum application availability (uptime). Ensure all knowledge is documented and that support runbooks and knowledge articles are kept up to date. Approach support with a proactive attitude, working to improve the environment before issues occur. Update the runbook and KEDB as and when required. Participate in all BCP and component failure tests based on the runbooks. Understand the flow of data through the application infrastructure; it is critical to understand the dataflow so as to best provide operational support. Your skills and experience: Must have: Programming language - Java. Operating systems - UNIX, Windows and the underlying infrastructure environments. Middleware (e.g. MQ, Kafka or similar) - WebLogic. Webserver environment - Apache, Tomcat. Database - Oracle, MS-SQL, Sybase, NoSQL. Batch monitoring - Control-M/Autosys. Scripting - UNIX shell and PowerShell, Perl, Python. Monitoring tools - Geneos, AppDynamics, Dynatrace or Grafana. ITIL Service Management framework such as Incident, Problem, and Change processes. Preferably knowledge and experience of GCP.
Nice to have: 3+ years of experience in IT in large corporate environments, specifically in the area of controlled production environments or in Financial Services Technology in a client-facing function. Good analytical and problem-solving skills. ITIL / best-practice service context; ITIL Foundation is a plus. Ticketing tool experience - Service Desk, ServiceNow. Understanding of SRE concepts (SLAs, SLOs, SLIs). Knowledge and development experience in Ansible automation. Working knowledge of one cloud platform (AWS or GCP). Excellent communication skills, both written and verbal, with attention to detail. Ability to work in virtual teams and in matrix structures. How we'll support you. About us and our teams: Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.

Posted 1 month ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Bengaluru

Work from Office

About us: As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful. Overview about TII: At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations. (Pyramid overview) Network Security Monitoring (NSM) Position. About Network Security Monitoring: Target's Network Security Monitoring (NSM) team builds and maintains a fleet of over 2000 network sensors across the globe, providing network visibility and advanced monitoring capabilities to our Cyber Defense organization. We build scalable and maintainable infrastructure with full end-to-end ownership of both the hardware and software lifecycle. Our work enables timely detection and response to adversaries by delivering reliable network visibility through a resilient sensor grid and advanced monitoring capability.
Team Overview NSM team members regularly: - Collaborate with Networking partners on network design and network sensor placement - Build, deploy, and upgrade network sensors (servers) globally - Design and implement network traffic analysis solutions using engines like Zeek and Suricata - Leverage Salt for configuration management, deployment automation, and infrastructure-as-code implementation - Partner with Cyber Defense to build network-based detections and consult in response scenarios - Develop performance monitoring solutions to track data quality and sensor health to ensure grid health and data fidelity Position Overview Expect to: - Configure, troubleshoot, and optimize network sensors across diverse environments - Debug complex networking issues and perform packet-level analysis to ensure proper traffic visibility. - Build and maintain Salt-based automation for configuration management and deployment. - Analyze monitoring data to identify system improvements and validate detection coverage. - Develop and automate testing to ensure results and outcomes are as expected. - Participate in on-call rotations to support the global sensor grid and respond to critical issues. - Collaborate cross-functionally with teams throughout Cyber Defense and IT - Document operational procedures for sensor management best practices - Research new network security monitoring technologies and evaluate their potential implementation. - Contribute to capacity planning and architectural design of monitoring infrastructure. - Manage and maintain Linux/Unix-based systems that host Zeek sensors, ensuring high availability, performance, and security. - Perform OS-level troubleshooting, patching, and hardening of sensor infrastructure. - Automate server provisioning and configuration using tools like Salt, shell scripting, and Python. - Monitor system logs and metrics to proactively identify and resolve issues affecting sensor performance. 
About you: - Bachelor's degree in Networking, Computer Science, or related field (or equivalent experience). - 4+ years of experience in network administration, network security, or related roles, with a deep knowledge of network protocols and packet analysis. - Experience with network security monitoring tools, including Zeek and Suricata. - Strong foundation in automation and infrastructure as code, Salt experience preferred. - You understand CI/CD principles and can implement pipelines for testing and deploying code and configuration changes. - Proficient in Linux/Unix systems administration, including shell scripting, system tuning, and troubleshooting. - Hands-on experience managing server infrastructure in production environments, including patching, upgrades, and performance tuning. - Practical experience with packet capture technologies and traffic analysis tools. - Proven ability to troubleshoot complex distributed systems and methodically diagnose network issues. - You appreciate the importance of dev/prod parity and can design for consistent environments across dev and prod. - Experience writing custom detection rules and understanding their performance implications. - Familiarity with technologies such as Zabbix, Prometheus, Nagios, Grafana, Elastic, Kibana
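Sensor-health monitoring of the kind described above often starts as a simple threshold sweep over per-sensor metrics. A hedged sketch of that check, assuming hypothetical metric names (`drop_pct`, `lag_s`) rather than Zeek's or Suricata's actual counters:

```python
def unhealthy_sensors(metrics, max_drop_pct=1.0, max_lag_s=60):
    """Flag sensors whose packet-drop percentage or event lag exceeds
    the given thresholds.

    metrics: mapping of sensor name -> {'drop_pct': float, 'lag_s': float}
    Returns a dict of sensor name -> list of human-readable reasons.
    """
    flagged = {}
    for name, m in metrics.items():
        reasons = []
        if m["drop_pct"] > max_drop_pct:
            reasons.append(f"drop {m['drop_pct']:.2f}% > {max_drop_pct}%")
        if m["lag_s"] > max_lag_s:
            reasons.append(f"lag {m['lag_s']}s > {max_lag_s}s")
        if reasons:
            flagged[name] = reasons
    return flagged
```

In a real grid the thresholds would live in configuration management (e.g. Salt pillar data) and the flagged list would feed an alerting pipeline rather than being returned directly.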

Posted 1 month ago

Apply

4.0 - 9.0 years

9 - 14 Lacs

Bengaluru

Work from Office

Role Purpose / Roles & Responsibilities (in Detail)

We are looking for a skilled Python Developer responsible for managing DevOps and infrastructure for the AI/ML space. Your primary focus will be setting up the tech stack and dependency software deployment, GPU infrastructure configuration and commissioning, and building container applications.

Responsibilities:
- Build and automate the MDLC lifecycle using ADO
- Manage BAU and operational issues and queries to ensure component deployments are stable and operational in lower and production environments
- Manage and set up alerting across environments
- Monitor application performance and look for opportunities to improve through infra tuning and scaling
- Investigate and close security issues and observations during pipeline promotion

Skills:
- Good knowledge of Jenkins, Azure DevOps, or CI/CD Vx-Pipeline
- In-depth knowledge of configuration and automation tools like RunDeck and Ansible
- Good scripting knowledge in Python, Groovy, and Shell
- Good knowledge of microservice architecture and containerized platforms like Docker
- Good knowledge of version and source control tools like ADO repos, Git, GitHub, Bitbucket, etc.
- Good understanding and implementation skills with monitoring tools like Alertmanager and Grafana
- Good knowledge of container orchestration platforms like OpenShift and Kubernetes
- Good knowledge of Kubernetes features, behaviour, resource management, administration, deployment, and automation using Helm charts
- Good knowledge of Linux platforms and their variants
- Good knowledge of NGINX and other web services
- Good knowledge of code quality tools like SonarQube
- Good knowledge of networking basics
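"Manage and set up alerting across environments" with Alertmanager usually comes down to a routing tree that maps alert labels to receivers. The sketch below is a hedged, minimal illustration; the receiver names, addresses, and SMTP host are hypothetical and would differ per environment.

```yaml
# Hypothetical Alertmanager routing config (alertmanager.yml fragment).
route:
  receiver: team-email          # default receiver for anything unmatched
  group_by: [alertname]
  routes:
    - matchers:
        - severity = "critical" # page only on critical alerts
      receiver: oncall-page

receivers:
  - name: team-email
    email_configs:
      - to: ops-team@example.com
        from: alertmanager@example.com
        smarthost: smtp.example.com:587
  - name: oncall-page
    webhook_configs:
      - url: http://pager-bridge.example.com/notify
```

The same config can be promoted from lower to production environments with only the receiver endpoints changed, which keeps alert routing consistent across environments.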

Posted 1 month ago

Apply

4.0 - 9.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Role Purpose

Primary Skills: Azure DevOps, Jenkins, CI/CD Vx-Pipeline, RunDeck, Ansible, Python, Groovy, Shell, Linux, NGINX, Kubeflow/MLflow
Secondary Skills: ADO, Git/GitHub/Bitbucket, Alertmanager, Grafana, OpenShift, Kubernetes

Your primary focus will be setting up the tech stack and dependency software deployment, GPU infrastructure configuration and commissioning, and building container applications.
- Build and automate the MDLC lifecycle using ADO
- Manage BAU and operational issues and queries to ensure component deployments are stable and operational in lower and production environments
- Manage and set up alerting across environments
- Monitor application performance and look for opportunities to improve through infra tuning and scaling
- Investigate and close security issues and observations during pipeline promotion
- Good knowledge of Jenkins, Azure DevOps, or CI/CD Vx-Pipeline

Deliver (No. / Performance Parameter / Measure):
1. Continuous Integration, Deployment & Monitoring of Software: 100% error-free onboarding & implementation, throughput %, adherence to the schedule/release plan
2. Quality & CSAT: on-time delivery, manage software, troubleshoot queries, customer experience, completion of assigned certifications for skill upgradation
3. MIS & Reporting: 100% on-time MIS & report generation

Posted 1 month ago

Apply

5.0 - 8.0 years

5 - 8 Lacs

Bengaluru

Work from Office

Mandatory Skills: Python, APIs, AI/ML, CI/CD, and Kubernetes
- Model Deployment & Testing: Deploy AI/ML models to development and production environments, ensuring functionality and performance.
- Model Optimization: Evaluate and size models appropriately to optimize performance and resource utilization.
- Custom Framework Development: Build custom frameworks for models that are not supported by existing deployment frameworks.
- Error Debugging & Monitoring: Identify and troubleshoot 4xx/5xx errors using Splunk and database logs.
- Performance Monitoring: Utilize Grafana to monitor GPU usage, serving metrics, and request processing performance.
- Customer Support: Collaborate with customers to resolve serving and latency issues.
- CI/CD & Kubernetes: Develop and manage CI/CD pipelines for automated deployments; deploy models in Kubernetes environments.
- Storage & Infrastructure: Create and manage Persistent Volume Claims (PVCs) for efficient storage management.
- Microservices Architecture: Work within a microservices-based environment, ensuring seamless integration and scalability.
- Automation & Framework Development: Automate processes and develop new frameworks to improve deployment efficiency.
- Performance Testing: Conduct performance testing to ensure models meet latency and throughput requirements.
- Team Leadership & Individual Contribution: Capable of leading a team while also working independently on critical tasks.

Deliver (No. / Performance Parameter / Measure):
1. Continuous Integration, Deployment & Monitoring of Software: 100% error-free onboarding & implementation, throughput %, adherence to the schedule/release plan
2. Quality & CSAT: on-time delivery, manage software, troubleshoot queries, customer experience, completion of assigned certifications for skill upgradation
3. MIS & Reporting: 100% on-time MIS & report generation

Mandatory Skills: Python for Data Science. Experience: 5-8 years.
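The "create and manage Persistent Volume Claims" responsibility can be illustrated with a minimal Kubernetes manifest. The claim name, size, and storage class below are hypothetical placeholders, not values from the posting.

```yaml
# Hypothetical PVC for shared model-artifact storage.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: model-store
spec:
  accessModes:
    - ReadWriteOnce          # single-node read/write; serving pods on one node
  resources:
    requests:
      storage: 50Gi          # sized for the model artifacts being served
  storageClassName: standard # cluster-specific; would differ per environment
```

Applied with `kubectl apply -f pvc.yaml`, the claim is then mounted into a serving Deployment via a `persistentVolumeClaim` volume, so model weights survive pod restarts and redeployments.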

Posted 1 month ago

Apply

1.0 - 4.0 years

1 - 5 Lacs

Mumbai

Work from Office

DevOps Engineer (2–8 yrs) – Mumbai. Experience in CI/CD (Jenkins/GitLab), Docker, Kubernetes, Terraform/Ansible, AWS/Azure, Git, and monitoring tools (Prometheus/Grafana). Strong scripting & DevOps practices.

Posted 1 month ago

Apply

7.0 - 9.0 years

19 - 25 Lacs

Pune

Work from Office

Location: Pune
Experience: 7-9 Years
Notice Period: Immediate to 15 Days

Overview

We are looking for an experienced IT Operations (Monitoring & Observability) Consultant to design, implement, and optimize end-to-end observability solutions. The ideal candidate will have a strong background in monitoring frameworks, ITSM integrations, and AIOps tools to drive system reliability, performance, and proactive incident management.

Key Responsibilities
- Design and deploy comprehensive monitoring and observability architectures for infrastructure, applications, and networks.
- Implement tools like Prometheus, Grafana, OpsRamp, Dynatrace, and New Relic for system performance monitoring.
- Integrate monitoring systems with ITSM platforms (e.g., ServiceNow, BMC Remedy).
- Develop dashboards, alerts, and reports to enable real-time performance insights.
- Architect solutions for hybrid and multi-cloud environments.
- Automate alerting, remediation, and reporting to streamline operations.
- Apply AIOps and ML for anomaly detection and predictive insights.
- Collaborate with DevOps, infra, and app teams to embed monitoring into CI/CD.
- Document architectures, procedures, and operational playbooks.

Required Skills
- Hands-on experience with observability tools: Prometheus, Grafana, ELK Stack, Fluentd, Dynatrace, New Relic, OpsRamp.
- Strong scripting knowledge in Python and Ansible.
- Familiarity with tracing tools (e.g., Jaeger, Zipkin) and REST API integrations.
- Working knowledge of AIOps concepts and predictive monitoring.
- Solid understanding of ITIL processes and service management frameworks.
- Familiarity with security monitoring and compliance considerations.
- Excellent analytical, troubleshooting, and documentation skills.
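"Develop dashboards, alerts, and reports" with Prometheus typically starts from an alerting rule file like the minimal sketch below. The group name, threshold, and label values are illustrative assumptions; only `up` is a metric Prometheus exposes for every scrape target.

```yaml
# Hypothetical Prometheus alerting rules file (e.g. rules/availability.yml).
groups:
  - name: availability
    rules:
      - alert: InstanceDown
        expr: up == 0          # target failed its scrape
        for: 5m                # must persist 5 minutes before firing
        labels:
          severity: critical
        annotations:
          summary: "Instance {{ $labels.instance }} is down"
          description: "{{ $labels.instance }} has been unreachable for 5 minutes."
```

Once referenced under `rule_files:` in `prometheus.yml`, fired alerts flow to Alertmanager, which is the natural integration point for the ITSM platforms (ServiceNow, BMC Remedy) the posting mentions.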

Posted 1 month ago

Apply