
1633 Grafana Jobs - Page 39


5.0 - 8.0 years

7 - 10 Lacs

Pune

Work from Office

Company Overview: With 80,000 customers across 150 countries, UKG is the largest U.S.-based private software company in the world. And we're only getting started. Ready to bring your bold ideas and collaborative mindset to an organization that still has so much more to build and achieve? Read on. Here, we know that you're more than your work. That's why our benefits help you thrive personally and professionally, from wellness programs and tuition reimbursement to U Choose, a customizable expense reimbursement program that covers more than 200 needs that best suit you and your family, from student loan repayment to childcare to pet insurance. Our inclusive culture, active and engaged employee resource groups, and caring leaders value every voice and support you in doing the best work of your career. If you're passionate about our purpose, people, then we can't wait to support whatever gives you purpose. We're united by purpose, inspired by you.

About the Role: The Site Reliability Engineer will support and maintain various cloud infrastructure technology tools in our hosted production/DR environments and act as the subject matter expert for specific tools or monitoring solutions. They will be responsible for testing, verifying, and implementing upgrades, patches, and new implementations, and will partner with other service functions to investigate and improve monitoring solutions. They may mentor tools team members or train other cross-functional teams as required, may motivate, develop, and manage the performance of individuals and teams while on shift, and may be assigned to produce regular and ad hoc management reports in a timely manner. Proficient in Splunk/ELK and Datadog. Experience with observability tools such as Prometheus/InfluxDB and Grafana. Strong knowledge of at least one scripting language such as Python, Bash, PowerShell, or another relevant language.
Responsibilities: Design, develop, and maintain observability tools and infrastructure. Collaborate with other teams to ensure observability best practices are followed. Develop and maintain dashboards and alerts for monitoring system health. Troubleshoot and resolve issues related to observability tools and infrastructure.

Qualifications: Bachelor's degree in Information Systems, Computer Science, or a related discipline, with 5-8 years of relevant experience. Proficient in Splunk/ELK and Datadog. Experience with enterprise software implementations for large-scale organizations. Extensive knowledge of current technology trends such as SaaS, cloud, hosting services, and application management services. Experience with monitoring tools like Grafana, Prometheus, and Datadog. Experience deploying application and infrastructure clusters in a public cloud environment using a cloud management platform. Professional and positive, with outstanding customer-facing practices and a can-do attitude, willing to go the extra mile. Consistently follows up and follows through on delegated tasks and actions.

Where we're going: UKG is on the cusp of something truly special. Worldwide, we already hold the #1 market share position for workforce management and the #2 position for human capital management. Tens of millions of frontline workers start and end their days with our software, with billions of shifts managed annually through UKG solutions today. Yet it's our AI-powered product portfolio, designed to support customers of all sizes, industries, and geographies, that will propel us into an even brighter tomorrow!

Disability Accommodation in the Application and Interview Process: UKGCareers@ukg.com
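The responsibilities above include developing dashboards and alerts for monitoring system health. As a hedged illustration of the core idea only (not UKG's actual stack), the sketch below evaluates a threshold alert the way a Prometheus/Grafana alert rule with a `for:` duration does; the function name, metric data, and thresholds are all invented.

```python
# Minimal sketch of threshold-based alerting, the idea behind a
# Grafana/Prometheus alert rule. Names and thresholds are illustrative.

def evaluate_alert(samples, threshold, for_count):
    """Fire only if the metric exceeds `threshold` for `for_count`
    consecutive samples (mirrors the `for:` clause in alert rules)."""
    streak = 0
    for value in samples:
        streak = streak + 1 if value > threshold else 0
        if streak >= for_count:
            return True
    return False

# CPU utilisation samples collected each scrape interval (hypothetical data).
cpu = [0.42, 0.91, 0.95, 0.97, 0.40]
print(evaluate_alert(cpu, threshold=0.90, for_count=3))  # True: three consecutive breaches
```

In a real deployment the evaluation loop, deduplication, and routing would be handled by the monitoring stack itself; the point here is only the "sustained breach" semantics.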

Posted 1 month ago

Apply

10.0 - 15.0 years

12 - 17 Lacs

Pune

Work from Office

Site Reliability Engineers at UKG are team members with a breadth of knowledge encompassing all aspects of service delivery. They develop software solutions to enhance, harden, and support our service delivery processes. This can include building and managing CI/CD deployment pipelines, automated testing, capacity planning, performance analysis, monitoring, alerting, chaos engineering, and auto-remediation. Site Reliability Engineers must have a passion for learning and evolving with current technology trends. They strive to innovate and are relentless in their pursuit of a flawless customer experience. They have an "automate everything" mindset, helping us bring value to our customers by deploying services with incredible speed, consistency, and availability.

Primary/Essential Duties and Key Responsibilities: Proficient in Splunk/ELK and Datadog. Experience with observability tools such as Prometheus/InfluxDB and Grafana.
Strong knowledge of at least one scripting language such as Python, Bash, PowerShell, or another relevant language. Design, develop, and maintain observability tools and infrastructure. Collaborate with other teams to ensure observability best practices are followed. Develop and maintain dashboards and alerts for monitoring system health. Troubleshoot and resolve issues related to observability tools and infrastructure. Engage in and improve the lifecycle of services from conception to EOL, including system design consulting and capacity planning. Define and implement standards and best practices related to system architecture, service delivery, metrics, and the automation of operational tasks. Support services, product, and engineering teams by providing common tooling and frameworks to deliver increased availability and improved incident response. Improve system performance, application delivery, and efficiency through automation, process refinement, postmortem reviews, and in-depth configuration analysis. Collaborate closely with engineering professionals within the organization to deliver reliable services. Identify and eliminate operational toil by treating operational challenges as software engineering problems. Actively participate in incident response, including on-call responsibilities. Partner with stakeholders to influence and help drive the best possible technical and business outcomes. Guide junior team members and serve as a champion for Site Reliability Engineering. Engineering degree, or a related technical discipline, and 10+ years of experience in SRE.
Experience coding in higher-level languages (e.g., Python, JavaScript, C++, or Java). Knowledge of cloud-based applications and containerization technologies. Demonstrated understanding of best practices in metric generation and collection, log aggregation pipelines, time-series databases, and distributed tracing. Ability to analyze the technology and engineering practices currently in use at the company and develop steps and processes to improve and expand upon them. Working experience with industry-standard tools like Terraform and Ansible. Must have hands-on experience working within Engineering or Cloud. Experience with public cloud platforms (e.g., GCP, AWS, Azure). Experience in configuration and maintenance of applications and systems infrastructure. Experience with distributed system design and architecture. Experience building and managing CI/CD pipelines.

Disability Accommodation: UKGCareers@ukg.com

Posted 1 month ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Noida

Work from Office

The Site Reliability Engineer will support and maintain various cloud infrastructure technology tools in our hosted production/DR environments and act as the subject matter expert for specific tools or monitoring solutions. They will be responsible for testing, verifying, and implementing upgrades, patches, and new implementations, and will partner with other service functions to investigate and improve monitoring solutions. They may mentor tools team members or train other cross-functional teams as required, may motivate, develop, and manage the performance of individuals and teams while on shift, and may be assigned to produce regular and ad hoc management reports in a timely manner. Proficient in Splunk/ELK and Datadog. Experience with observability tools such as Prometheus/InfluxDB and Grafana. Strong knowledge of at least one scripting language such as Python, Bash, PowerShell, or another relevant language.
Responsibilities: Design, develop, and maintain observability tools and infrastructure. Collaborate with other teams to ensure observability best practices are followed. Develop and maintain dashboards and alerts for monitoring system health. Troubleshoot and resolve issues related to observability tools and infrastructure.

Qualifications: Bachelor's degree in Information Systems, Computer Science, or a related discipline, with 5-8 years of relevant experience. Proficient in Splunk/ELK and Datadog. Experience with enterprise software implementations for large-scale organizations. Extensive knowledge of current technology trends such as SaaS, cloud, hosting services, and application management services. Experience with monitoring tools like Grafana, Prometheus, and Datadog. Experience deploying application and infrastructure clusters in a public cloud environment using a cloud management platform. Professional and positive, with outstanding customer-facing practices and a can-do attitude, willing to go the extra mile. Consistently follows up and follows through on delegated tasks and actions.

Disability Accommodation in the Application and Interview Process: UKGCareers@ukg.com

Posted 1 month ago

Apply

12.0 - 17.0 years

14 - 19 Lacs

Bengaluru

Work from Office

In this R&D Architect role, you'll lead architecture and subsystem design from the early stages, ensuring systems are robust, scalable, and performance-driven. You'll collaborate closely with cross-functional stakeholders to capture both functional and non-functional requirements, and translate business goals into innovative, cloud-native solutions. As a tech visionary, you'll recommend emerging technologies to boost product capabilities and design integrated software-hardware systems with a focus on compatibility and high performance. You'll drive critical design reviews, assess risks, validate technical choices, and act as a trusted advisor throughout the development lifecycle. Your guidance will empower development teams through mentorship and ensure alignment with architectural best practices.

You have: Bachelor's or Master's degree in Engineering (or equivalent) with a minimum of 12 years of experience, including 8+ years in solution design and software development. Proven expertise in system architecture, cloud-native design, and microservices development using Java, Spring Boot, and containerization (Docker, Kubernetes). Hands-on experience with streaming technologies (Flink, Spark, or Storm), VNF-based applications on VMware/OpenStack, and tools like Helm, Minikube, and Swagger. Skilled in automation and CI/CD pipelines using Jenkins and Git, and in monitoring/logging solutions including Prometheus, Grafana, and the EFK stack. Strong track record of coaching engineers in agile environments and driving end-to-end technical excellence across complex, scalable systems.

It would be nice if you also had: Hands-on experience installing and integrating products in complex, multi-vendor environments. Expertise in DevOps practices and end-to-end deployment automation.

Responsibilities: Lead architecture and subsystem design in early product phases, translating business goals and customer needs into scalable, cloud-native solutions.
Collaborate with stakeholders to define and manage functional and non-functional requirements, ensuring alignment with the overall product vision. Recommend and integrate emerging technologies to boost product capabilities while designing robust software/hardware systems for performance and compatibility. Drive design reviews, risk assessments, and technical validations while mentoring development teams and guiding architectural best practices.

Posted 1 month ago

Apply

12.0 - 17.0 years

14 - 19 Lacs

Bengaluru

Work from Office

We're looking for a hands-on architect to design, deploy, and manage Kubernetes clusters, ensuring high availability and performance. You'll lead the full lifecycle management of databases, automating installs, upgrades, backups, and decommissions, while actively contributing to open-source communities. This role involves driving security excellence by analyzing and remediating vulnerabilities (CVEs), conducting in-depth assessments using tools like Burp Suite and Anchore, and ensuring compliance with industry standards. You'll optimize workloads for resilience, troubleshoot complex issues across OS, containers, and databases, and deliver production-ready solutions. Strong debugging, observability, and collaboration skills are essential.

You have: Bachelor's or Master's engineering degree or equivalent, with over 12 years of experience in databases and Kubernetes and deep expertise in architecture, automation, and secure deployments; expert in MariaDB, Cassandra, and Redis, including tuning and troubleshooting in production. Strong programming skills in Python for automation and tooling, with hands-on experience in containerized environments using Docker, Kubernetes, Helm charts, and custom Operators. Proven track record in microservices architecture, container orchestration, virtualization, and DevOps practices, including CI/CD pipeline development and deployment automation. Advanced knowledge of security protocols (TLS, SSH), encryption standards, and secure design principles, with experience in threat modeling, system hardening, and security-by-design methodologies. Skilled in security assessments and tooling, including vulnerability scanning, penetration testing, and robustness/DoS analysis using tools such as Anchore, Tenable, Netsparker, Codenomicon, and Nmap; familiarity with SBOM generation and integration in CI/CD workflows.
It would be nice if you also had: Working knowledge of Infrastructure as Code tools like Terraform or Pulumi, along with GitOps workflows. Familiarity with Prometheus, Grafana, ELK/EFK stacks, or OpenTelemetry for end-to-end observability, especially for performance tuning and incident response in distributed systems.

Responsibilities: Design, deploy, and manage scalable, highly available MariaDB, Cassandra, and Redis databases within Kubernetes clusters, while continuously optimizing performance and reliability. Automate end-to-end lifecycle management workflows, including install, upgrade, backup, recovery, and decommission, while contributing technical improvements to open-source communities. Lead the response to security vulnerabilities across database stacks, collaborating with security and engineering teams to analyze, prioritize, and remediate CVEs. Conduct in-depth security assessments using tools like Burp Suite, Anchore, and Codenomicon, and map findings to risk levels to ensure compliance with security standards. Collaborate with cross-functional teams and customers to deliver secure, production-ready database solutions, troubleshoot complex issues across the stack, and stay current with trends in Kubernetes, OSS, and cloud security.

Posted 1 month ago

Apply

5.0 - 9.0 years

15 - 20 Lacs

Pune

Work from Office

Project description: We're seeking a strong and creative Software Engineer eager to solve challenging problems of scale and work on cutting-edge technologies. In this project, you will have the opportunity to write code that will impact thousands of users. You'll apply your critical thinking and technical skills to develop cutting-edge software, and you'll have the opportunity to interact with teams across disciplines. At Luxoft, our culture is one that thrives on solving difficult problems, focusing on product engineering based on hypothesis testing to empower people to come up with ideas. In this new adventure, you will have the opportunity to collaborate with a world-class team in the field of insurance by building a holistic solution and interacting with multidisciplinary teams.

Responsibilities: As a Lead OpenTelemetry Developer, you will be responsible for developing and maintaining OpenTelemetry-based solutions. You will work on instrumentation, data collection, and observability tools to ensure seamless integration and monitoring of applications. This role involves writing documentation and promoting best practices around OpenTelemetry.

Skills, must have: Senior candidates with 8+ years of experience. Experience in instrumentation: expertise in at least one programming language supported by OpenTelemetry and a broad knowledge of other languages (e.g., Python, Java, Go, PowerShell, .NET). Passion for observability: strong interest in observability and experience in writing documentation and blog posts to share knowledge. Experience in Java instrumentation techniques (e.g., bytecode manipulation, JVM internals, Java agents). Secondary skills, telemetry: familiarity with tools and technologies such as Prometheus, Grafana, and other observability platforms (e.g., Dynatrace, AppDynamics (Splunk), Amazon CloudWatch, Azure Monitor, Honeycomb).

Other Languages: English C2 Proficient. Seniority: Senior.
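The role above centres on instrumentation. Production work would use the OpenTelemetry SDK, so the stdlib-only sketch below only illustrates the underlying span idea that instrumentation (manual or agent-based) implements: wrap application code, time it, and record a named span for export. The `traced` decorator, `SPANS` list, and `checkout` function are hypothetical stand-ins, not the OpenTelemetry API.

```python
import functools
import time

SPANS = []  # stand-in for an exporter; real code sends spans to a collector

def traced(name):
    """Decorator sketching what instrumentation does: wrap a call,
    time it, and record a span. Hypothetical, not the OpenTelemetry API."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                SPANS.append({"name": name,
                              "duration_s": time.perf_counter() - start})
        return inner
    return wrap

@traced("checkout")
def checkout(total):
    return total * 1.18  # pretend business logic (illustrative 18% tax)

checkout(100.0)
print(SPANS[0]["name"])  # checkout
```

Java auto-instrumentation agents achieve the same effect without decorators, by rewriting bytecode at class-load time; the recorded data (span name, timing, attributes) is the common currency either way.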

Posted 1 month ago

Apply

5.0 - 8.0 years

14 - 18 Lacs

Bengaluru

Work from Office

Project description: Our client is one of the largest UK grocery and general merchandise retailers and is in the middle of a transformation into a technology company in retail. As part of this exercise, a renovation of the technology landscape is taking place across the company. As a partner, we support our client in this journey and help develop new applications from scratch for various departments (Supply Chain, Product Lifecycle Management, Finance, HR, etc.). Our teams are responsible for the development of platform components in an Agile environment together with the client, based on an event-based microservices architecture. As a developer, you will work with a team of professionals in your country and collaborate with experts from all over the world to develop modern, high-load applications in a cloud environment. Our ideal candidate is a passionate, smart individual with a strong engineering background, ready to work in a self-managed team, accept challenges, and take an active role in their resolution. The successful candidate will not only solve engineering tasks but also take responsibility for achieving the client's business goals together with the product owner and business stakeholders. Your effort will help our client meet their ambition to satisfy the most demanding clients and become the number one technology company in retail. In return, you will have the ability to grow your technical skills, extend your network, and share knowledge with experts all over the world.

Responsibilities: Be part of the support team, resolving incidents and speaking to colleagues in the UK. Collaborate with the engineering team to come up with the right fixes to prevent issues where possible. Work with teams to create dashboards and alerts to detect issues proactively, and the right tools and runbooks to fix issues faster. To reduce the incoming ticket volume, identify issues proactively and, lastly, solve tickets sooner.
This is a support role with two shifts between 09:30 AM and 10:30 PM, on a 14*7 rotational shift. Advanced troubleshooting: diagnose and resolve complex hardware, software, and network issues escalated from Level 1 support. Problem analysis: identify the root cause of technical problems and implement effective solutions. Technical expertise: possess a strong understanding of systems, applications, and troubleshooting methodologies. Documentation: maintain accurate records of incidents, resolutions, and knowledge-base articles. Communication: communicate effectively with end users, Level 1 support, and other technical teams to ensure clear information flow and collaboration. Escalation: know when and how to escalate issues to higher-level support (L3) when necessary. Knowledge sharing: contribute to the knowledge base and train Level 1 support on new technologies and procedures. Performance monitoring: track and analyze system performance metrics to identify potential issues and proactively address them. Proactive support: identify and address potential issues before they impact end users.

Skills, must have: 4+ years of experience in the role. Hands-on experience with Azure Cloud. Experience with CI/CD tools such as Jenkins, GitLab CI, CircleCI, or Travis CI. Debugging skills, knowledge of Linux OS and commands, shell scripting. Understanding of Kubernetes and Docker commands. Splunk dashboard creation; able to write queries based on the error signature. Grafana. Good communication skills, written and verbal. Willingness to work in support shifts.

Nice to have: Prometheus, Ansible, Terraform scripting. Experience working in Agile/Scrum teams and the ability to collaborate with cross-functional teams.

Other Languages: English C1 Advanced. Seniority: Regular.
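One of the must-have skills above is writing Splunk queries based on an error signature. As a rough, hedged analogue of what such an aggregation computes, the plain-Python sketch below groups invented log lines by signature and counts them; the log format and field positions are made up for illustration.

```python
from collections import Counter

# Hypothetical application log lines; format invented for illustration.
logs = [
    "2024-05-01T10:00:01 ERROR TimeoutError upstream call exceeded 5s",
    "2024-05-01T10:00:04 INFO  request served in 120ms",
    "2024-05-01T10:00:09 ERROR TimeoutError upstream call exceeded 5s",
    "2024-05-01T10:00:12 ERROR NullPointerException in OrderService",
]

def errors_by_signature(lines):
    """Count ERROR lines per signature (the token after the level),
    similar in spirit to a Splunk `stats count by signature` search."""
    sigs = (line.split()[2] for line in lines if " ERROR " in line)
    return Counter(sigs)

print(errors_by_signature(logs))  # Counter({'TimeoutError': 2, 'NullPointerException': 1})
```

In Splunk itself the extraction and counting happen server-side over indexed events; the value of thinking in these terms is that a dashboard panel is just this aggregation rendered over a time window.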

Posted 1 month ago

Apply

3.0 - 7.0 years

13 - 18 Lacs

Bengaluru

Work from Office

Project description: We've been engaged by a large Australian financial institution to provide resources to manage production support activities alongside their existing team in Sydney and India.

Responsibilities: Carry out enhancements to maintenance/housekeeping scripts as required and monitor DB growth periodically. Handle cloud environment preparation, refresh, rebuild, upkeep, maintenance, and upgrade activities. Ensure cloud cost optimisation. Troubleshoot Murex environment-specific issues, including infrastructure-related issues, and update pipelines for a permanent fix. Handle EOD execution and troubleshoot issues related to it. Participate in the analysis, solutioning, and deployment of solutions for production issues during EOD. Participate in release activity and coordinate with QA/Release teams. Participate in AWS stack deployment, AWS AMI patching, and stack configuration to ensure optimal performance and cost-efficiency. Address requests such as warehouse rebuilds and maintenance, perform health/sanity checks, create XVA engines, and carry out environment restores and backups in AWS as per project need. Perform weekend maintenance and health checks in the production environment during the weekend. Support working in shifts (maximum end time 12:30 AM IST) and be available for weekend and on-call support. May have to work out of the client location on a need basis. Flexible to work in a hybrid model.
Skills, must have: 4 to 8 years of experience in Murex production support. Murex end-of-day support. Troubleshooting batch-related issues, including date moves and processing adjustments. Murex environment management and troubleshooting. Experienced in SQL, Unix shell scripting, monitoring tools, and web development. Experienced in the release and CI/CD process. Linux/Unix server and Oracle RDS knowledge. Working experience with automation/job scheduling tools such as Autosys and GitHub Actions. Working experience with monitoring tools like Grafana, Splunk, Obstack, and PagerDuty. Good communication and organization skills, working within a DevOps team supporting a wider IT delivery team.

Nice to have: PL/SQL, scripting languages (Python). Advanced troubleshooting experience with shell scripting and Python. Experience with CI/CD tools like Git, flows, Ansible, and AWS including CDK. Exposure to the AWS Cloud environment. Willing to learn and obtain AWS certification.

Other Languages: English C1 Advanced. Seniority: Regular.

Posted 1 month ago

Apply

6.0 - 11.0 years

8 - 18 Lacs

Chennai

Work from Office

Employment Type: Contract

1. 6 to 8 years of experience as a DevOps Consultant.
2. 4 to 5 years of experience in CI/CD (mandatory).
3. 3 to 4 years of experience in Git / AWS.
4. Good working experience with Prometheus and Grafana.

Posted 1 month ago

Apply

4.0 - 7.0 years

11 - 16 Lacs

Bangalore Rural, Bengaluru

Work from Office

Experience in Java application support, database management, and working knowledge of application servers like Tomcat/WebLogic. Experience with tools like ServiceNow/Jira. Comfortable with UNIX, shell scripting, and observability tools such as Splunk/AppDynamics/ELK/Prometheus.

Posted 1 month ago

Apply

6.0 - 9.0 years

8 - 11 Lacs

Hyderabad

Hybrid

We are seeking a skilled Database Specialist with strong expertise in Time-Series Databases, specifically Loki for logs, InfluxDB, and Splunk for metrics. The ideal candidate will have a solid background in query languages, Grafana, Alert Manager, and Prometheus. This role involves managing and optimizing time-series databases, ensuring efficient data storage, retrieval, and visualization.

Key Responsibilities: Design, implement, and maintain time-series databases using Loki, InfluxDB, and Splunk to store and manage high-velocity time-series data. Develop efficient data-ingestion pipelines for time-series data from various sources (e.g., IoT devices, application logs, metrics). Optimize database performance for high write and read throughput, ensuring low latency and high availability. Implement and manage retention policies, downsampling, and data-compression strategies to optimize storage and query performance. Collaborate with DevOps and infrastructure teams to deploy and scale time-series databases in cloud or on-premises environments. Build and maintain dashboards and visualization tools (e.g., Grafana) for monitoring and analyzing time-series data. Troubleshoot and resolve issues related to data ingestion, storage, and query performance. Work with development teams to integrate time-series databases into applications and services. Ensure data security, backup, and disaster-recovery mechanisms are in place for time-series databases. Stay updated with the latest advancements in time-series database technologies and recommend improvements to existing systems.

Key Skills: Strong expertise in time-series databases with Loki (for logs), InfluxDB, and Splunk (for metrics).
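The responsibilities above mention retention policies and downsampling. The arithmetic behind downsampling is simple to sketch: samples are grouped into fixed time buckets and aggregated, trading resolution for storage. The plain-Python example below (function name, data, and bucket size all invented) illustrates the idea that engines such as InfluxDB automate with tasks or continuous queries.

```python
def downsample(points, bucket_s):
    """Average (timestamp, value) samples into fixed buckets of
    `bucket_s` seconds -- the core arithmetic of a downsampling task."""
    buckets = {}
    for ts, value in points:
        buckets.setdefault(ts - ts % bucket_s, []).append(value)
    return [(start, sum(v) / len(v)) for start, v in sorted(buckets.items())]

# One reading every 10 s, downsampled to 30 s averages (hypothetical data).
raw = [(0, 1.0), (10, 2.0), (20, 3.0), (30, 10.0), (40, 20.0)]
print(downsample(raw, 30))  # [(0, 2.0), (30, 15.0)]
```

Retention policies then work at the level of whole buckets: raw data is kept briefly, downsampled rollups much longer, which is why the two features are usually configured together.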

Posted 1 month ago

Apply

5.0 - 8.0 years

12 - 16 Lacs

Mangaluru, Udupi

Hybrid

SRE Lead Role Description: We are seeking an experienced SRE Strategist to lead the reliability and operational excellence agenda for our Enterprise Data Platforms spanning GCP cloud-native systems. This strategic leadership role will help instill Google's SRE principles across diverse data engineering teams, uplift our platform reliability posture, and spearhead the creation of a Centre of Excellence (CoE) for SRE. The ideal candidate will possess a deep understanding of modern SRE practices, demonstrate a proven ability to scale SRE capabilities in large enterprises, and evangelise a data-driven approach to resilience engineering.

Key Responsibilities: Define and drive SRE strategy for enterprise data platforms on GCP, aligning with business goals and reliability needs. Act as a trusted advisor to platform teams, embedding the SRE mindset, best practices, and golden signals into their SDLC and operational processes. Set up and lead a Site Reliability Engineering CoE, delivering reusable tools, runbooks, blueprints, and platform accelerators to scale SRE adoption across the organisation. Partner with product and platform owners to prioritise and structure SRE backlogs, formulate roadmaps, and help teams move from reactive ops to proactive reliability engineering. Define and track SLIs, SLOs, and error budgets across critical data services, enabling data-driven decision making around availability and performance. Drive incident response maturity, including chaos engineering, incident retrospectives, and blameless postmortems. Foster a reliability culture through coaching, workshops, and cross-functional forums. Build strategic relationships across engineering, data governance, security, and architecture teams to ensure reliability is baked in, not bolted on.

Required Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related discipline. 3+ years in SRE leadership or SRE strategy roles.
Strong familiarity with Google SRE principles and practical experience applying them in complex enterprise settings. Proven track record in establishing and scaling SRE teams. Experience with GCP services like Cloud Build, GCS, CloudSQL, Cloud Functions, and GCP logging and monitoring. Deep experience with observability stacks such as Prometheus, Grafana, Splunk, and GCP-native solutions. Skilled in Infrastructure as Code using tools like Terraform, and working knowledge of automation in CI/CD environments.

Key Competencies & Skills: Strong leadership, influence without authority, and mentoring capabilities. Hands-on scripting and automation skills in Python, with secondary languages like Go or Java a plus. Familiarity with incident and problem management frameworks in enterprise environments. Ability to define and execute a platform-wide reliability roadmap in alignment with architectural and business objectives.

Nice to Have: Exposure to secrets management tools (e.g., HashiCorp Vault). Experience with tracing and APM tools like Google Cloud Trace or Honeycomb. Background in data governance, data pipelines, and security standards for data products.

Posted 1 month ago

Apply

6.0 - 11.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Experience: 5+ Years. Skill: Site Reliability Engineer. Location: Bangalore. Notice Period: Immediate. Employment Type: Contract. Working Mode: Hybrid. Job Description: Site Reliability Engineer. Tech Stack (Primary): AWS, Terraform, Ansible, Docker. Tech Stack (Secondary): Python, Bash, GitHub, Jenkins.

Posted 1 month ago

Apply

9.0 - 14.0 years

25 - 27 Lacs

Bengaluru, Mumbai (All Areas)

Hybrid

Datamatics is a CMMI Level 5 company. Datamatics, a global Digital Solutions, Technology, and BPM Company, provides intelligent solutions for data-driven businesses to increase productivity and enhance the customer experience. With a completely digital approach, Datamatics' portfolio spans Information Technology Services, Business Process Management, Engineering Services, and Big Data & Analytics, all powered by Artificial Intelligence. It has established products in Robotic Process Automation, Intelligent Document Processing, Business Intelligence, and Automatic Fare Collection. Datamatics serves global customers across Banking, Financial Services, Insurance, Healthcare, Manufacturing, International Organizations, and Media & Publishing. The Company has a presence across four continents, with major delivery centers in the USA, India, and the Philippines. Job Role: Senior DevOps Engineer. Location: Mumbai / Bangalore (Hybrid). Experience: 9+ years. Note: We are looking for candidates with an immediate to 15-day notice period. Candidates will have: A good degree in a relevant area (Computer Science, Engineering, or equivalent experience). Experience with CI/CD tools (GitLab CI/CD, Ansible, GitHub Actions, Jenkins, CircleCI, etc.) and version control systems (Git, SVN, etc.). Proven experience as a DevOps Engineer or in a similar role, preferably in a fast-paced, agile environment. Strong knowledge of cloud computing platforms (AWS, Azure), experience with infrastructure-as-code tools (Terraform, CloudFormation, etc.), and experience working with on-premises data centres. Strong knowledge of supporting high-availability enterprise systems (Tomcat, Apache, Keepalived, HAProxy). Proficiency in scripting and automation using languages such as Python, Ruby, or Shell. Familiarity with containerization and orchestration technologies (Docker, Kubernetes, etc.). Solid understanding of networking concepts, security best practices, and system monitoring tools.
Strong problem-solving and analytical skills with the ability to troubleshoot complex issues. Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams. Keen eye for documenting processes and knowledge. Current Technologies in Use: Our current infrastructure is built on a robust suite of technologies, which requires in-depth expertise in several areas: Docker & Docker Compose – for containerisation and management of microservices. Kubernetes & Helm – for orchestration and deployment of applications. AWS – specifically VPC, EKS, ECR, STS, ACM, IAM, CloudWatch, S3, Route53, and App Mesh for our cloud infrastructure. Terraform – for Infrastructure as Code (IaC) management. GitLab CI/CD, GitHub Actions – for managing our continuous integration and delivery pipelines. Prometheus & Grafana – for monitoring and observability. Bash, Python – for scripting and automation tasks. Ansible – for automated infrastructure deployment and setup. Java, Tomcat, Apache HTTPD – for managing our application servers. Linux OSs, DNS, Networking, TCP/IP – core knowledge for managing AWS VPC and Docker networking. SonarCloud – for code quality and security scanning. Interested candidates can drop their resume at bhakti.rajwada@datamatics.com
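Supporting high-availability stacks such as the Tomcat/HAProxy setup named above often involves scripted health checks with retry backoff. A minimal sketch in Python, using only the standard library; the endpoint URL and retry parameters are placeholders, not anything specified in the posting:

```python
import urllib.error
import urllib.request


def check_endpoint(url: str, timeout: float = 2.0) -> bool:
    """Return True if the endpoint answers with a 2xx status within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False


def backoff_schedule(base: float = 1.0, factor: float = 2.0,
                     retries: int = 5, cap: float = 30.0) -> list:
    """Exponential backoff delays (seconds) between failed health checks, capped."""
    return [min(cap, base * factor ** i) for i in range(retries)]


delays = backoff_schedule()
# delays == [1.0, 2.0, 4.0, 8.0, 16.0]
```

A cron job or Keepalived check script would loop over `delays` between probes before paging, rather than hammering a struggling backend at a fixed interval.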

Posted 1 month ago

Apply

2.0 - 7.0 years

5 - 8 Lacs

Mumbai (Nehru Nagar)

Work from Office

Job Summary: We are seeking a dedicated Infrastructure Monitoring Specialist using Zabbix. The ideal candidate will be responsible for monitoring and maintaining our IT infrastructure to ensure optimal performance and availability in an environment with a strong automation focus. This role requires sound technical expertise, excellent problem-solving skills, and the ability to work collaboratively with cross-functional teams in a fast-paced, agile environment. Responsibilities: Install, configure, update, and manage Zabbix central components, and automate these tasks over time. Identify and troubleshoot issues with Zabbix monitoring in real time to minimize downtime. Provide automation templates and examples of the configuration of endpoints, alerts, baselines, and other monitoring objects and activities for users to consume. Collaborate with IT teams to implement and automate monitoring solutions and improvements such as baseline definition, aggregation and correlation, alerts, notifications, blackout management, mapping, dashboards, etc., in line with defined monitoring strategies. Implement, manage, and periodically test backup and recovery of the monitoring tools environment. Generate and analyze performance reports to ensure system efficiency and capacity. Provide support and training to team members on monitoring tools and leading practices. Cross-train on Cribl streaming software to expand support to Cribl. Required Skills: 2-4 years of experience with Zabbix. Understanding of IT infrastructure and network monitoring. Familiarity with Windows and Linux operating systems. Scripting languages (e.g., Python, Bash). Good problem-solving and analytical skills. Ability to work in a fast-paced environment and manage multiple tasks. Excellent communication and interpersonal skills, with the ability to work with technical and non-technical stakeholders. Desired Skills: Experience with other monitoring tools (e.g., Nagios, Prometheus, Grafana).
Familiarity with related DevOps tools and methodologies, including: Git source control (Azure DevOps, GitHub, GitLab, etc.), CI/CD (Jenkins, Azure DevOps, GitHub Actions, GitLab), and Infrastructure as Code / Configuration as Code. Knowledge of cloud platforms (e.g., AWS, Azure). Experience with Cribl.
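Automating Zabbix configuration like this typically goes through its JSON-RPC API. A minimal sketch that only builds and inspects a request body, without sending it; the server URL and session token are placeholders, and `host.get` is a standard Zabbix API method:

```python
import json
from typing import Optional

ZABBIX_URL = "https://zabbix.example.com/api_jsonrpc.php"  # placeholder endpoint


def rpc_payload(method: str, params: dict,
                auth: Optional[str] = None, req_id: int = 1) -> bytes:
    """Serialize a Zabbix JSON-RPC 2.0 request body."""
    body = {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}
    if auth is not None:
        # Session token from user.login; newer Zabbix releases can pass the
        # token in an Authorization header instead of the request body.
        body["auth"] = auth
    return json.dumps(body).encode()


# Fetch host IDs and names -- the kind of call a templating script would make.
payload = rpc_payload("host.get", {"output": ["hostid", "host"]}, auth="SESSION-TOKEN")
decoded = json.loads(payload)
```

In a real automation script this payload would be POSTed to `ZABBIX_URL` with a `Content-Type: application/json-rpc` header; keeping payload construction in one function makes it easy to unit-test the automation without a live Zabbix server.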

Posted 1 month ago

Apply

12.0 - 17.0 years

15 - 19 Lacs

Bengaluru

Work from Office

YOUR IMPACT: At Debricked, we are passionate about open source. Our mission is to simplify the use of open source for developers, eliminating any associated risks. We started our journey in 2018 as a start-up based in Sweden, were acquired into the Fortify Application Security line of products in 2022, and are now expanding our team in India with OpenText. If you want to join a team of fun, driven, and supportive colleagues and want the opportunity to shape the journey ahead, Debricked is the place for you. WHAT THE ROLE OFFERS: In this role, you will work with extensive datasets covering all aspects of Open Source Software, ranging from vulnerability definitions to GitHub star scores. Your work could involve enhancing our existing products, such as our automation engine, open databases, and analysis tool, or you might be creating an entirely new part of the service. You will significantly influence our team and product. We use the latest tools and technologies, including PHP 8.2 and Symfony 6 for our backend, and Vue for our frontend. As part of our startup culture, each employee enjoys a lot of personal freedom and responsibility, with significant scope to influence product development. WHAT YOU NEED TO SUCCEED: Minimum 12+ years of experience in PHP development, or some experience in PHP and vast experience in other similar languages such as Java, C++, C#, or Go. Experience with an MVC framework such as Symfony or Laravel. Proficiency in Git. Comfortable with SQL (MySQL is what we use, but other SQL flavors are fine). Experience deploying SaaS to cloud platforms such as Amazon Cloud or Google Cloud. Experience with cloud services and practices such as Google Cloud, AWS, Docker, Terraform, Kubernetes, RabbitMQ, Grafana, and Prometheus. Develop and document the overall architecture of software systems, including high-level design, component architecture, and data flow diagrams.
Develop and implement strategies to mitigate identified risks and ensure that the system meets performance, security, and reliability requirements. Ensure that the architecture and implementation comply with industry standards, best practices, and organizational guidelines. Propose and implement improvements to existing systems and processes, fostering innovation and optimizing system performance. Strong analytical and troubleshooting skills. Ability to communicate complex technical concepts clearly to both technical and non-technical stakeholders. Experience leading or mentoring teams, driving projects, and influencing decisions. ONE LAST THING: You are persistent and inquisitive. You have to understand why things are happening the way they are. You are determined to understand cyber attack techniques at a very detailed level. You are a self-starter who is able to work with minimal management, yet you have strong collaboration and interpersonal skills to work together with professionals from other information security fields. You're a creative thinker who wants to answer the question, "Why?" Your workstation is a pyramid of monitors that you can't take your eyes off of at the risk of missing something. You have a desire to learn new technologies. Your sense of humor, passion, and enthusiasm shine through in everything you do.

Posted 1 month ago

Apply

1.0 - 4.0 years

5 - 9 Lacs

Hyderabad

Work from Office

AI Opportunities with Soul AI's Expert Community! Are you an MLOps Engineer ready to take your expertise to the next level? Soul AI (by Deccan AI) is building an elite network of AI professionals, connecting top-tier talent with cutting-edge projects. Why Join? Above-market-standard compensation. Contract-based or freelance opportunities (2-12 months). Work with industry leaders solving real AI challenges. Flexible work locations: Remote | Onsite | Hyderabad/Bangalore. Your Role: Architect and optimize ML infrastructure with Kubeflow, MLflow, and SageMaker Pipelines. Build CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI/CD). Automate ML workflows (feature engineering, retraining, deployment). Scale ML models with Docker, Kubernetes, and Airflow. Ensure model observability, security, and cost optimization in the cloud (AWS/GCP/Azure). Must-Have Skills: Proficiency in Python, TensorFlow, PyTorch, and CI/CD pipelines. Hands-on experience with cloud ML platforms (AWS SageMaker, GCP Vertex AI, Azure ML). Expertise in monitoring tools (MLflow, Prometheus, Grafana). Knowledge of distributed data processing (Spark, Kafka). (Bonus: Experience in A/B testing, canary deployments, serverless ML.) Next Steps: Register on Soul AI's website. Get shortlisted & complete screening rounds. Join our Expert Community and get matched with top AI projects. Don't just find a job. Build your future in AI with Soul AI!
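The automated-retraining workflow this role describes usually hinges on a drift check. A hedged, library-free sketch of one common trigger, the Population Stability Index; the bin values and the 0.2 threshold are conventional illustrations, not anything specified in the posting:

```python
import math


def psi(expected: list, actual: list, eps: float = 1e-6) -> float:
    """Population Stability Index between two binned distributions (as proportions)."""
    score = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)  # clamp to avoid log(0)
        score += (a - e) * math.log(a / e)
    return score


def should_retrain(expected: list, actual: list, threshold: float = 0.2) -> bool:
    """Common rule of thumb: PSI above ~0.2 signals significant drift."""
    return psi(expected, actual) > threshold


baseline = [0.25, 0.25, 0.25, 0.25]        # training-time feature distribution
drifted = [0.10, 0.20, 0.30, 0.40]         # distribution observed in production
retrain = should_retrain(baseline, drifted)  # True: PSI is about 0.23
```

A scheduled job (Airflow, or a SageMaker Pipeline step) would compute these bins from recent inference logs and kick off retraining only when the gate trips.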

Posted 1 month ago

Apply

1.0 - 4.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Data Engineers for a unique opportunity to work with industry leaders. Who can be a part of the community? We are looking for top-tier Data Engineers who are proficient in designing, building, and optimizing data pipelines. If you have experience in this field, this is your chance to collaborate with industry leaders. What's in it for you? Pay above market standards. The role is contract-based, with project timelines from 2-12 months, or freelancing. Be part of an elite community of professionals who can solve complex AI challenges. Work location could be: Remote (highly likely), onsite at the client location, or Deccan AI's office in Hyderabad or Bangalore. Responsibilities: Design and architect enterprise-scale data platforms, integrating diverse data sources and tools. Develop real-time and batch data pipelines to support analytics and machine learning. Define and enforce data governance strategies to ensure security, integrity, and compliance, and optimize data pipelines for high performance, scalability, and cost efficiency in cloud environments. Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices. Required Skills: Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP). Proficient in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA). Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana). Nice to Have: Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions.
Contributions to open-source data engineering communities. What are the next steps? Register on our Soul AI website. Our team will review your profile. Clear all the screening rounds: complete the assessments once you are shortlisted. Profile matching and project allocation: be patient while we align your skills and preferences with an available project. Skip the noise. Focus on opportunities built for you!
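The real-time pipelines this posting names (Kafka, Kinesis, Flink) usually reduce to windowed aggregation over an event stream. A minimal pure-Python sketch of a tumbling count window; the event shape `(epoch_seconds, key)` is a hypothetical stand-in for a clickstream record:

```python
from collections import defaultdict


def tumbling_counts(events, window_secs: int) -> dict:
    """Count events per fixed, non-overlapping time window.

    events: iterable of (epoch_seconds, key) pairs.
    Returns {(window_start, key): count}.
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_secs) * window_secs  # floor to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)


events = [(100, "click"), (105, "click"), (162, "click"), (90, "view")]
per_minute = tumbling_counts(events, 60)
# {(60, "click"): 2, (120, "click"): 1, (60, "view"): 1}
```

A Flink or Spark Structured Streaming job expresses the same idea declaratively (and adds watermarks for late data); the floor-to-boundary assignment shown here is the core of the operator either way.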

Posted 1 month ago

Apply

2.0 - 5.0 years

7 - 11 Lacs

Hyderabad

Work from Office

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% DevOps Engineers for a unique opportunity to work with industry leaders. Who can be a part of the community? We are looking for professionals skilled in infrastructure automation, CI/CD pipelines, cloud computing, and monitoring tools. Proficiency in Terraform, Kubernetes, Docker, and cloud platforms is required. If you have experience in this field, this is your chance to collaborate with industry leaders. What's in it for you? Pay above market standards. The role is contract-based, with project timelines from 2-12 months, or freelancing. Be part of an elite community of professionals who can solve complex AI challenges. Work location could be: Remote (highly likely), onsite at the client location, or Deccan AI's office in Hyderabad or Bangalore. Responsibilities: Implement and manage CI/CD pipelines for efficient software deployment. Automate infrastructure provisioning using tools like Terraform, CloudFormation, or Ansible. Monitor system performance, troubleshoot issues, ensure high availability, and manage cloud environments (AWS, GCP, Azure) for scalability and security. Collaborate with development teams to ensure smooth integration and deployment processes. Required Skills: Proficiency with CI/CD tools (Jenkins, GitLab CI, CircleCI) and infrastructure automation (Terraform, Ansible). Strong experience with cloud platforms (AWS, GCP, Azure) and containerization (Docker, Kubernetes). Familiarity with version control (Git), system monitoring tools (Prometheus, Grafana), and scripting languages (Python, Bash) for automation. Nice to Have: Experience with serverless architectures and microservices. Knowledge of security best practices and compliance (IAM, encryption). What are the next steps? Register on our Soul AI website. Our team will review your profile.
Clear all the screening rounds: complete the assessments once you are shortlisted. Profile matching and project allocation: be patient while we align your skills and preferences with an available project. Skip the noise. Focus on opportunities built for you!

Posted 1 month ago

Apply

1.0 - 5.0 years

9 - 13 Lacs

Hyderabad

Work from Office

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Platform Engineers for a unique opportunity to work with industry leaders. Who can be a part of the community? We are looking for Platform Engineers focused on building scalable and high-performance AI/ML platforms. A strong background in cloud architecture, distributed systems, Kubernetes, and infrastructure automation is expected. If you have experience in this field, this is your chance to collaborate with industry leaders. What's in it for you? Pay above market standards. The role is contract-based, with project timelines from 2-12 months, or freelancing. Be part of an elite community of professionals who can solve complex AI challenges. Work location could be: Remote (highly likely), onsite at the client location, or Deccan AI's office in Hyderabad or Bangalore. Responsibilities: Architect and maintain scalable cloud infrastructure on AWS, GCP, or Azure using tools like Terraform and CloudFormation. Design and implement Kubernetes clusters with Helm, Kustomize, and a service mesh (Istio, Linkerd). Develop CI/CD pipelines using GitHub Actions, GitLab CI/CD, Jenkins, and Argo CD for automated deployments. Implement observability solutions (Prometheus, Grafana, ELK stack) for logging, monitoring, and tracing; automate infrastructure provisioning with tools like Ansible, Chef, and Puppet; and optimize cloud costs and security. Required Skills: Expertise in cloud platforms (AWS, GCP, Azure) and infrastructure as code (Terraform, Pulumi), with strong knowledge of Kubernetes, Docker, CI/CD pipelines, and scripting (Bash, Python). Experience with observability tools (Prometheus, Grafana, ELK stack) and security practices (RBAC, IAM). Familiarity with networking (VPC, load balancers, DNS) and performance optimization. Nice to Have: Experience with chaos engineering (Gremlin, LitmusChaos) and canary or blue-green deployments.
Knowledge of multi-cloud environments, FinOps, and cost optimization strategies. What are the next steps? Register on our Soul AI website. Our team will review your profile. Clear all the screening rounds: complete the assessments once you are shortlisted. Profile matching and project allocation: be patient while we align your skills and preferences with an available project. Skip the noise. Focus on opportunities built for you!
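The canary deployments this posting lists under Nice to Have typically step traffic up on a schedule and gate each step on an observed error rate. A small illustrative sketch; the step percentages and the 1% error threshold are assumptions, not a specific tool's defaults:

```python
def canary_weights(steps):
    """Given canary percentages, yield (canary, stable) traffic weights summing to 100."""
    for pct in steps:
        if not 0 <= pct <= 100:
            raise ValueError(f"invalid canary percentage: {pct}")
        yield pct, 100 - pct


def promote_or_rollback(error_rate: float, slo_error_rate: float = 0.01) -> str:
    """Gate each rollout step on the canary's observed error rate vs. a threshold."""
    return "promote" if error_rate <= slo_error_rate else "rollback"


# A typical progressive schedule: 5% -> 25% -> 50% -> full cutover.
schedule = list(canary_weights([5, 25, 50, 100]))
# [(5, 95), (25, 75), (50, 50), (100, 0)]
```

In practice a controller such as Argo Rollouts applies these weights to the mesh or ingress and queries Prometheus for the error rate before calling the equivalent of `promote_or_rollback` at each step.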

Posted 1 month ago

Apply

3.0 - 8.0 years

13 - 18 Lacs

Hyderabad

Work from Office

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Data Architects for a unique opportunity to work with industry leaders. Who can be a part of the community? We are looking for top-tier Data Architects who are proficient in designing, building, and optimizing data pipelines. If you have experience in this field, this is your chance to collaborate with industry leaders. What's in it for you? Pay above market standards. The role is contract-based, with project timelines from 2-12 months, or freelancing. Be part of an elite community of professionals who can solve complex AI challenges. Work location could be: Remote (highly likely), onsite at the client location, or Deccan AI's office in Hyderabad or Bangalore. Responsibilities: Design and architect enterprise-scale data platforms, integrating diverse data sources and tools. Develop real-time and batch data pipelines to support analytics and machine learning. Define and enforce data governance strategies to ensure security, integrity, and compliance, and optimize data pipelines for high performance, scalability, and cost efficiency in cloud environments. Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices. Required Skills: Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP). Proficient in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA). Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana). Nice to Have: Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions.
Contributions to open-source data engineering communities. What are the next steps? Register on our Soul AI website. Our team will review your profile. Clear all the screening rounds: complete the assessments once you are shortlisted. Profile matching and project allocation: be patient while we align your skills and preferences with an available project. Skip the noise. Focus on opportunities built for you!

Posted 1 month ago

Apply

1.0 - 4.0 years

5 - 9 Lacs

Bengaluru

Work from Office

AI Opportunities with Soul AI's Expert Community! Are you an MLOps Engineer ready to take your expertise to the next level? Soul AI (by Deccan AI) is building an elite network of AI professionals, connecting top-tier talent with cutting-edge projects. Why Join? Above-market-standard compensation. Contract-based or freelance opportunities (2-12 months). Work with industry leaders solving real AI challenges. Flexible work locations: Remote | Onsite | Hyderabad/Bangalore. Your Role: Architect and optimize ML infrastructure with Kubeflow, MLflow, and SageMaker Pipelines. Build CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI/CD). Automate ML workflows (feature engineering, retraining, deployment). Scale ML models with Docker, Kubernetes, and Airflow. Ensure model observability, security, and cost optimization in the cloud (AWS/GCP/Azure). Must-Have Skills: Proficiency in Python, TensorFlow, PyTorch, and CI/CD pipelines. Hands-on experience with cloud ML platforms (AWS SageMaker, GCP Vertex AI, Azure ML). Expertise in monitoring tools (MLflow, Prometheus, Grafana). Knowledge of distributed data processing (Spark, Kafka). (Bonus: Experience in A/B testing, canary deployments, serverless ML.) Next Steps: Register on Soul AI's website. Get shortlisted & complete screening rounds. Join our Expert Community and get matched with top AI projects. Don't just find a job. Build your future in AI with Soul AI!

Posted 1 month ago

Apply

1.0 - 5.0 years

9 - 13 Lacs

Bengaluru

Work from Office

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Platform Engineers for a unique opportunity to work with industry leaders. Who can be a part of the community? We are looking for Platform Engineers focused on building scalable and high-performance AI/ML platforms. A strong background in cloud architecture, distributed systems, Kubernetes, and infrastructure automation is expected. If you have experience in this field, this is your chance to collaborate with industry leaders. What's in it for you? Pay above market standards. The role is contract-based, with project timelines from 2-12 months, or freelancing. Be part of an elite community of professionals who can solve complex AI challenges. Work location could be: Remote (highly likely), onsite at the client location, or Deccan AI's office in Hyderabad or Bangalore. Responsibilities: Architect and maintain scalable cloud infrastructure on AWS, GCP, or Azure using tools like Terraform and CloudFormation. Design and implement Kubernetes clusters with Helm, Kustomize, and a service mesh (Istio, Linkerd). Develop CI/CD pipelines using GitHub Actions, GitLab CI/CD, Jenkins, and Argo CD for automated deployments. Implement observability solutions (Prometheus, Grafana, ELK stack) for logging, monitoring, and tracing; automate infrastructure provisioning with tools like Ansible, Chef, and Puppet; and optimize cloud costs and security. Required Skills: Expertise in cloud platforms (AWS, GCP, Azure) and infrastructure as code (Terraform, Pulumi), with strong knowledge of Kubernetes, Docker, CI/CD pipelines, and scripting (Bash, Python). Experience with observability tools (Prometheus, Grafana, ELK stack) and security practices (RBAC, IAM). Familiarity with networking (VPC, load balancers, DNS) and performance optimization. Nice to Have: Experience with chaos engineering (Gremlin, LitmusChaos) and canary or blue-green deployments.
Knowledge of multi-cloud environments, FinOps, and cost optimization strategies. What are the next steps? Register on our Soul AI website. Our team will review your profile. Clear all the screening rounds: complete the assessments once you are shortlisted. Profile matching and project allocation: be patient while we align your skills and preferences with an available project. Skip the noise. Focus on opportunities built for you!

Posted 1 month ago

Apply

3.0 - 8.0 years

13 - 18 Lacs

Bengaluru

Work from Office

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Data Architects for a unique opportunity to work with industry leaders. Who can be a part of the community? We are looking for top-tier Data Architects who are proficient in designing, building, and optimizing data pipelines. If you have experience in this field, this is your chance to collaborate with industry leaders. What's in it for you? Pay above market standards. The role is contract-based, with project timelines from 2-12 months, or freelancing. Be part of an elite community of professionals who can solve complex AI challenges. Work location could be: Remote (highly likely), onsite at the client location, or Deccan AI's office in Hyderabad or Bangalore. Responsibilities: Design and architect enterprise-scale data platforms, integrating diverse data sources and tools. Develop real-time and batch data pipelines to support analytics and machine learning. Define and enforce data governance strategies to ensure security, integrity, and compliance, and optimize data pipelines for high performance, scalability, and cost efficiency in cloud environments. Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices. Required Skills: Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP). Proficient in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA). Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana). Nice to Have: Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions.
Contributions to open-source data engineering communities. What are the next steps? Register on our Soul AI website. Our team will review your profile. Clear all the screening rounds: complete the assessments once you are shortlisted. Profile matching and project allocation: be patient while we align your skills and preferences with an available project. Skip the noise. Focus on opportunities built for you!

Posted 1 month ago

Apply

1.0 - 4.0 years

6 - 10 Lacs

Mumbai

Work from Office

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Data Engineers for a unique opportunity to work with industry leaders. Who can be a part of the community? We are looking for top-tier Data Engineers who are proficient in designing, building, and optimizing data pipelines. If you have experience in this field, this is your chance to collaborate with industry leaders. What's in it for you? Pay above market standards. The role is contract-based, with project timelines from 2-12 months, or freelancing. Be part of an elite community of professionals who can solve complex AI challenges. Work location could be: Remote (highly likely), onsite at the client location, or Deccan AI's office in Hyderabad or Bangalore. Responsibilities: Design and architect enterprise-scale data platforms, integrating diverse data sources and tools. Develop real-time and batch data pipelines to support analytics and machine learning. Define and enforce data governance strategies to ensure security, integrity, and compliance, and optimize data pipelines for high performance, scalability, and cost efficiency in cloud environments. Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices. Required Skills: Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP). Proficient in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA). Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana). Nice to Have: Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions.
Contributions to open-source data engineering communities. What are the next steps? Register on our Soul AI website. Our team will review your profile. Clear all the screening rounds: complete the assessments once you are shortlisted. Profile matching and project allocation: be patient while we align your skills and preferences with an available project. Skip the noise. Focus on opportunities built for you!

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies