3.0 - 7.0 years
0 Lacs
Delhi
On-site
As a frontend developer at Nomiso India, you will be responsible for building a workflow automation system to simplify existing manual processes. Your role will involve owning lifecycle management, automating platform operations, leading issue resolution, defining compliance standards, integrating various tools, driving observability and performance-tuning initiatives, and mentoring team members while championing operational best practices.

You can expect a stimulating and fun work environment at Nomiso, where innovation and thought leadership are highly valued. We provide opportunities for career growth, idea generation, and innovation at all levels of the company. As part of our team, you will be encouraged to push your boundaries and fulfill your career aspirations.

The core tools and technology stack you will work with include OpenShift, Kubernetes, GitOps, Ansible, Terraform, Prometheus, Grafana, the EFK Stack, Vault, SCCs, RBAC, NetworkPolicies, and more.

To qualify for this role, you should have a BE/B.Tech or equivalent degree in Computer Science or a related field. The position is based in Delhi-NCR.

Join us at Nomiso India and be part of a dynamic team that thrives on ideas, innovation, and challenges. Your contributions will be valued, and you will have the opportunity to grow professionally in a fast-paced and exciting environment. Let's work together to simplify complex business problems and empower our customers with effective solutions.
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Site Reliability Engineer - Incident Management, you will be responsible for monitoring, maintaining, and managing the entire Qualys infrastructure and services installed at different data centers. In the event of any malfunction in products or services, you will monitor, troubleshoot, repair, and restore the service or system promptly to ensure maximum service availability and performance. Your role will also involve providing support services for Engineering and other technical teams, collaborating for quicker issue resolution, and performing end-to-end incident management, documentation, and task automation.

Your main responsibilities will include monitoring the performance and capacity of computer systems, using various tools to identify and address issues effectively. You will be expected to conduct basic troubleshooting of platform/product issues, use tools such as Splunk, Grafana, and Kibana for performance checks, and manage PagerDuty. Additionally, you will assist with task automation wherever applicable, ensure timely resolution of incident tickets, and triage and troubleshoot problems affecting products or services. It will be crucial to meticulously track and document all issues and resolutions in the ticketing/documentation tools to enhance the knowledge base and maintain a record of system health. Where troubleshooting a complex issue is not feasible, you should escalate the problem to management, IT resources, or third-party vendors for further assistance. Communication within the team and externally to stakeholders, keeping them informed of relevant information, known issues, and steps being taken, will be an integral part of your role. The Site Reliability Engineer - Incident Management team operates 24*7*365 on a monthly shift rotation basis as per requirements.

To excel in this role, you should possess one to two years of IT Operations (infra/system admin/Linux) experience or a relevant certification. Familiarity with monitoring and integration tools like Splunk, Prometheus, Grafana, Kibana, PagerDuty, and Runscope, and with incident management tools such as Jira or ServiceNow, is beneficial. A good understanding of ITSM functions and tools, along with strong interpersonal skills to interact professionally with employees at all levels, will be essential. Certifications in computer fundamentals, Linux, system administration, VMware, IT security, or ITSM/ITIL, and knowledge of DevOps/SRE basics, Python, and cloud will be advantageous.
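For a flavour of the monitoring-and-alerting automation this role describes, here is a minimal Python sketch that polls a Prometheus metric and raises a PagerDuty event when a threshold is crossed. The Prometheus URL, PromQL expression, routing key, and threshold are assumed placeholders, not details of any real environment.

```python
"""Minimal sketch: poll a Prometheus metric and raise a PagerDuty alert.

Assumptions: the Prometheus URL, PromQL expression, routing key, and
threshold below are placeholders, not values from any real environment.
"""
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"    # assumed
PAGERDUTY_EVENTS_URL = "https://events.pagerduty.com/v2/enqueue"
ROUTING_KEY = "REPLACE_WITH_INTEGRATION_KEY"                   # assumed
QUERY = 'avg(rate(node_cpu_seconds_total{mode!="idle"}[5m]))'  # example PromQL
THRESHOLD = 0.90

def query_prometheus(expr: str) -> float:
    """Run an instant query and return the first sample value (0.0 if empty)."""
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": expr}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0

def trigger_pagerduty(summary: str) -> None:
    """Send a trigger event via the PagerDuty Events API v2."""
    payload = {
        "routing_key": ROUTING_KEY,
        "event_action": "trigger",
        "payload": {"summary": summary, "source": "sre-monitor", "severity": "critical"},
    }
    requests.post(PAGERDUTY_EVENTS_URL, json=payload, timeout=10).raise_for_status()

if __name__ == "__main__":
    value = query_prometheus(QUERY)
    if value > THRESHOLD:
        trigger_pagerduty(f"CPU saturation {value:.2f} exceeds {THRESHOLD}")
```

In practice a check like this would run on a schedule (cron or a monitoring sidecar) and deduplicate alerts before paging anyone.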
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Maharashtra
On-site
As an Engineering Manager focusing on the OSS Platform & Infrastructure team, you will be responsible for leading and managing a team of engineers to ensure the successful development and maintenance of the organization's platform. The role requires a deep understanding of, and practical experience in, several technical domains: hands-on expertise in Infrastructure as Code (IaC), cloud platforms, continuous integration/continuous deployment (CI/CD) pipelines, containerization and orchestration, and Site Reliability Engineering (SRE) principles. Your experience should include working in a product-oriented environment with leadership responsibilities in engineering.

In addition, you must demonstrate strong proficiency and practical experience with tools such as Ansible, Terraform, CloudFormation, and Pulumi. Knowledge of resource management frameworks like Apache Mesos, Kubernetes, and YARN is essential. Expertise in Linux operating systems and experience in monitoring, logging, and observability using tools like Prometheus, Grafana, and the ELK stack are also required. Your programming skills should encompass at least one high-level language such as Python, Java, or Golang. A solid understanding of architectural and systems design, including scalability and resilience patterns and various databases (RDBMS and NoSQL), and familiarity with multi-cloud and hybrid-cloud architectures, is crucial for this role.

Highly valued additional skills include expertise in network and infrastructure operational product engineering; knowledge of network protocols such as TCP/IP, UDP, HTTP/HTTPS, DNS, BGP, OSPF, VXLAN, and IPSec (a CCNA or equivalent certification would be advantageous); experience in network security, network automation, zero-trust concepts, TLS/SSL, VPNs, and protocols like gNMI, gRPC, and RESTCONF; and proficiency in Agile methodologies like Scrum and Kanban, backlog and workflow management, and SRE-specific reporting metrics (MTTR, deployment frequency, SLOs, etc.).
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Haryana
On-site
About MSIL: If you have travelled in India, taken a route to anywhere around this great nation, chances are you've driven with us. For close to four decades now, Maruti Suzuki cars have been going places. A Joint Venture Agreement with Suzuki Motor Corporation of Japan in 1982 laid the foundations of the Maruti Suzuki we all see today. Today, Maruti Suzuki alone makes more than 1.8 million family cars every year. That's one car every 10 seconds, and we stand head and shoulders above every major global auto company. We have built our story with a belief in small cars for a big future. The Maruti Suzuki journey has been nothing less than spectacular.

We are looking for a Backend Developer to design, build, and maintain a scalable and efficient microservices architecture. The candidate must have experience in developing microservices using modern frameworks and tools.

Key Responsibilities:
Design and Development: Develop, test, and deploy microservices that are scalable, efficient, and secure. Collaborate with cross-functional teams to define, design, and ship new features. Ensure the performance, quality, and responsiveness of applications.
Architecture and Best Practices: Implement best practices for microservices architecture, including API design, security, and performance optimization. Contribute to the design and implementation of the system architecture. Ensure that the microservices architecture supports high availability and resilience.
Continuous Integration and Deployment: Develop and maintain CI/CD pipelines to automate the deployment process. Monitor and manage the deployment of microservices in various environments. Troubleshoot and resolve issues in development, test, and production environments.
Collaboration and Communication: Work closely with frontend and backend developers, QA, and DevOps teams. Participate in code reviews, design discussions, and technical documentation. Communicate effectively with team members and stakeholders to ensure successful project delivery.
Maintenance and Support: Perform regular maintenance and updates to microservices. Ensure the security and integrity of the microservices. Provide support for production issues and resolve them in a timely manner.

Required Skills and Qualifications:
Technical Skills: Proficient in one or more programming languages, preferably Java (and related frameworks such as Spring Boot). Strong understanding of microservices architecture and design patterns. Experience with containerization technologies like Docker and orchestration tools like Kubernetes. Knowledge of RESTful APIs, gRPC, and messaging systems (e.g., Kafka, RabbitMQ). Familiarity with CI/CD tools such as Jenkins, GitLab CI, or CircleCI. Experience with database technologies such as SQL and NoSQL (e.g., MongoDB, Cassandra). Familiarity with monitoring and logging tools like Prometheus, Grafana, and the ELK stack. Understanding of DevOps practices and principles. Knowledge of Agile and Scrum ways of working.
Professional Experience: Experience with cloud platforms (preferably AWS). 4+ years of experience in software development, with a focus on microservices. Strong problem-solving skills and attention to detail.
Soft Skills: Excellent communication and teamwork skills. Ability to work independently and manage multiple tasks effectively. Strong analytical and troubleshooting abilities. A proactive approach to learning and development.
Posted 1 week ago
5.0 - 10.0 years
10 - 20 Lacs
Hyderabad
Hybrid
We're looking for a talented and results-oriented Cloud Solutions Architect to work as a key member of Sureify's engineering team. You'll help build and evolve our next-generation cloud-based compute platform for digitally delivered life insurance. You'll consider many dimensions such as strategic goals, growth models, opportunity cost, talent, and reliability, and you'll collaborate closely with the product development team on platform feature architecture so that the architecture aligns with operational needs and opportunities. With our customer base growing steadily, it's time for us to mature the fabric our software runs on. This is your opportunity to make a large impact at a high-growth enterprise software company.

Key Responsibilities:
- Collaborate with key stakeholders across our product, delivery, data, and support teams to design scalable and secure application architectures on AWS using services like EC2, ECS, EKS, Lambda, VPC, RDS, and ElastiCache, provisioned via Terraform.
- Design and implement CI/CD pipelines using GitHub, Jenkins, Spinnaker, and Helm to automate application deployment and updates, with a key focus on container management, orchestration, scaling, performance and resource optimization, and deployment strategies.
- Design and implement security best practices for AWS applications, including Identity and Access Management (IAM), encryption, container security, and secure coding practices.
- Design and implement application observability using CloudWatch and New Relic, with a focus on monitoring, logging, and alerting that provide insight into application performance and health.
- Design and implement key integrations between application components and external systems, ensuring smooth and efficient data flow.
- Diagnose and resolve issues related to application performance, availability, and reliability.
- Create, maintain, and prioritize a quarter-over-quarter backlog by identifying key areas of improvement such as cost optimization, process improvement, and security enhancements.
- Create and maintain comprehensive documentation outlining the infrastructure design, integrations, deployment processes, and configuration.
- Work closely with the DevOps team as a guide, mentor, and enabler to ensure that the practices you design and implement are followed and internalized by the team.

Required Skills:
- Proficiency in AWS services such as EC2, ECS, EKS, S3, RDS, VPC, Lambda, SES, SQS, ElastiCache, Redshift, and EFS.
- Strong programming skills in languages such as Groovy, Python, and Bash shell scripting.
- Experience with CI/CD tools and practices, including Jenkins, Spinnaker, and ArgoCD.
- Familiarity with IaC tools like Terraform or CloudFormation.
- Understanding of AWS security best practices, including IAM and KMS.
- Familiarity with Agile development practices and methodologies.
- Strong analytical skills with the ability to troubleshoot and resolve complex issues.
- Proficiency with observability, monitoring, and logging tools like AWS CloudWatch, New Relic, and Prometheus.
- Knowledge of container orchestration tools and concepts, including Kubernetes and Docker.
- Strong teamwork and communication skills, with the ability to work effectively with cross-functional teams.

Nice to have:
- AWS Certified Solutions Architect - Associate or Professional
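As a rough illustration of the CloudWatch-centric observability work this role describes, here is a minimal Python sketch that publishes a custom application metric with boto3. The namespace, metric name, and dimension values are assumed placeholders; credentials and region are expected to come from the standard AWS environment.

```python
"""Minimal sketch: publish a custom application metric to CloudWatch with boto3.

Assumptions: the namespace, metric name, and dimension values are placeholders;
credentials and region come from the standard AWS environment/config.
"""
import boto3

cloudwatch = boto3.client("cloudwatch")

def publish_queue_depth(depth: int, environment: str = "staging") -> None:
    """Push a single data point; an alarm or dashboard would consume it later."""
    cloudwatch.put_metric_data(
        Namespace="ExampleApp/Operations",  # assumed namespace
        MetricData=[
            {
                "MetricName": "QueueDepth",
                "Dimensions": [{"Name": "Environment", "Value": environment}],
                "Value": float(depth),
                "Unit": "Count",
            }
        ],
    )

if __name__ == "__main__":
    publish_queue_depth(42)
```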
Posted 1 week ago
5.0 - 9.0 years
8 - 12 Lacs
Ahmedabad, Vadodara
Work from Office
Job Summary: We are seeking an experienced and highly motivated Database Administrator (DBA) to join our team. The ideal candidate will be responsible for the design, implementation, performance tuning, and maintenance of relational (MSSQL, PostgreSQL) and NoSQL (MongoDB) databases, both on-premises and in cloud environments (AWS, Azure, GCP). You will ensure data integrity, security, availability, and optimal performance across all platforms.

Key Responsibilities:
Database Management & Optimization: Install, configure, and upgrade database servers (MSSQL, PostgreSQL, MongoDB). Monitor performance, optimize queries, and tune databases for efficiency. Implement and manage database clustering, replication, sharding, and high availability.
Cloud Database Administration: Manage cloud-based database services (e.g., Amazon RDS, Azure SQL Database, GCP Cloud SQL, MongoDB Atlas). Automate backup, failover, patching, and scaling in the cloud environment. Ensure secure access, encryption, and compliance in the cloud. ETL and DevOps experience is desirable.
Backup, Recovery & Security: Design and implement robust backup and disaster recovery plans. Regularly test recovery processes to ensure minimal downtime. Apply database security best practices (roles, permissions, auditing, encryption).
Scripting & Automation: Develop scripts for automation (using PowerShell, Bash, Python, etc.). Automate repetitive DBA tasks using DevOps/CI-CD tools (Terraform, Ansible, etc.).
Collaboration & Support: Work closely with developers, DevOps, and system admins to support application development. Assist with database design, indexing strategy, schema changes, and query optimization. Provide 24/7 support for critical production issues (an on-call rotation may apply).

Key Skills & Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience as a DBA, with production experience in MSSQL Server (SQL Server 2016 and above), PostgreSQL (including PostGIS and logical/physical replication), and MongoDB (including MongoDB Atlas, replica sets, and sharding).
- Experience with cloud database services (AWS RDS, Azure SQL, GCP Cloud SQL).
- Strong understanding of performance tuning, indexing, and query optimization.
- Solid grasp of backup and restore strategies, disaster recovery, and HA setups.
- Familiarity with monitoring tools (e.g., Prometheus, Datadog, New Relic, Zabbix).
- Knowledge of scripting languages (PowerShell, Bash, or Python).
- Understanding of DevOps principles, version control (Git), and CI/CD pipelines.

Preferred Qualifications:
- Certification in any cloud platform (AWS/Azure/GCP).
- Microsoft Certified: Azure Database Administrator Associate.
- Experience with Kubernetes Operators for databases (e.g., Crunchy Postgres Operator).
- Experience with Infrastructure as Code (Terraform, CloudFormation).
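To illustrate the kind of backup automation scripting this role calls for, here is a minimal Python sketch that drives pg_dump for a nightly logical backup. The host, database, user, and backup directory are assumed placeholders; a real deployment would add retention, encryption, and off-site copies.

```python
"""Minimal sketch: nightly logical backup of a PostgreSQL database with pg_dump.

Assumptions: host, database, user, and backup directory are placeholders;
the password is expected in the BACKUP_PASSWORD environment variable.
"""
import os
import subprocess
from datetime import datetime
from pathlib import Path

HOST = "db.example.internal"   # assumed
DBNAME = "appdb"               # assumed
USER = "backup_user"           # assumed
BACKUP_DIR = Path("/var/backups/postgres")

def run_backup() -> Path:
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    outfile = BACKUP_DIR / f"{DBNAME}_{stamp}.dump"
    env = dict(os.environ, PGPASSWORD=os.environ.get("BACKUP_PASSWORD", ""))
    # -F c produces a custom-format archive restorable with pg_restore
    subprocess.run(
        ["pg_dump", "-h", HOST, "-U", USER, "-d", DBNAME, "-F", "c", "-f", str(outfile)],
        check=True,
        env=env,
    )
    return outfile

if __name__ == "__main__":
    print(f"Backup written to {run_backup()}")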
Posted 1 week ago
5.0 - 8.0 years
15 - 20 Lacs
Bengaluru
Work from Office
We are looking for a skilled Oracle PCF Engineer with hands-on experience in the design, integration, deployment, and support of Policy and Charging Function (PCF) solutions in 4G/5G networks. The ideal candidate should have solid expertise in Oracle Communications PCRF/PCF, strong telecom domain knowledge, and the ability to troubleshoot complex policy-related network issues in real time.

Roles and Responsibilities:
- Design, configure, and deploy Oracle PCF solutions across 4G/5G network environments.
- Perform integration with other core network elements such as CHF, SMF, AMF, and UDR.
- Implement and test policy rules as per service provider requirements.
- Collaborate with solution architects, testers, and system integrators for end-to-end service flow validation.
- Monitor and analyze PCF performance using tools and logs; perform root cause analysis for incidents.
- Provide L2/L3 support, including post-deployment troubleshooting and upgrades.
- Participate in capacity planning, software patching, and lifecycle management of PCF platforms.
- Maintain high availability and redundancy of the PCF nodes.
- Prepare and maintain technical documentation, configurations, and operational procedures.
- Work with cross-functional teams (Core, OSS/BSS, Cloud, Security) to align PCF behavior with network policies.

Primary Skills:
- In-depth knowledge of 3GPP standards for 4G LTE and 5G SA/NSA networks, particularly the Policy and Charging Control (PCC) architecture.
- Experience with Oracle Communications Policy Management (OCPM).
- Strong understanding of network functions like SMF, AMF, CHF, and UDR, and interfaces such as N7, N15, Gx, Rx, and Sy.
- Proficiency in Linux/Unix systems, shell scripting, and system-level debugging.
- Familiarity with Diameter, HTTP/2, and REST-based protocols.
- Experience working with Kubernetes-based deployments, VNFs, or CNFs.
- Exposure to CI/CD tools, automation frameworks, and telecom service orchestration.

Good to Have:
- Experience with 5GC cloud-native deployments (e.g., on OCI, AWS, or OpenStack).
- Familiarity with Oracle OCI or other telecom cloud platforms.
- Knowledge of 5G QoS models, slice management, and dynamic policy handling.
- Experience with monitoring tools like Prometheus, Grafana, or the ELK stack.
- Prior involvement in DevOps or SRE practices within telecom environments.
Posted 1 week ago
3.0 - 8.0 years
10 - 20 Lacs
Hyderabad, Ahmedabad, Bengaluru
Work from Office
Summary: Sr. Site Reliability Engineer - Keep Planet-Scale Systems Reliable, Secure, and Fast (on-site only).

At Ajmera Infotech, we build planet-scale platforms for NYSE-listed clients, from HIPAA-compliant health systems to FDA-regulated software that simply cannot fail. Our 120+ elite engineers design, deploy, and safeguard mission-critical infrastructure trusted by millions.

Why You'll Love It:
- Dev-first SRE culture: automation, CI/CD, and a zero-toil mindset.
- TDD, monitoring, and observability baked in, not bolted on.
- Code-first reliability: script, ship, and scale with real ownership.
- Mentorship-driven growth, with exposure to regulated industries (HIPAA, FDA, SOC 2).
- End-to-end impact: own infrastructure across Dev and Ops.

Key Responsibilities:
- Architect and manage scalable, secure Kubernetes clusters (k8s/k3s) in production.
- Develop scripts in Python, PowerShell, and Bash to automate infrastructure operations.
- Optimize performance, availability, and cost across cloud environments.
- Design and enforce CI/CD pipelines using Jenkins, Bamboo, and GitHub Actions.
- Implement log monitoring and proactive alerting systems.
- Integrate and tune observability tools like Prometheus and Grafana.
- Support both development and operations pipelines for continuous delivery.
- Manage infrastructure components including Artifactory, Nginx, Apache, and IIS.
- Drive compliance readiness across HIPAA, FDA, ISO, and SOC 2.

Must-Have Skills:
- 3-8 years in SRE or infrastructure engineering roles.
- Kubernetes (k8s/k3s) production experience.
- Scripting: Python, PowerShell, Bash.
- CI/CD tools: Jenkins, Bamboo, GitHub Actions.
- Experience with log monitoring, alerting, and observability stacks.
- Cross-functional pipeline support (Dev + Ops).
- Tooling: Artifactory, Nginx, Apache, IIS.
- Performance, availability, and cost-efficiency tuning.

Nice-to-Have Skills:
- Background in regulated environments (HIPAA, FDA, ISO, SOC 2).
- Multi-OS platform experience.
- Integration of Prometheus, Grafana, or similar observability platforms.

What We Offer:
- Competitive salary package with performance-based bonuses.
- Comprehensive health insurance for you and your family.
- Flexible working hours and generous paid leave.
- High-end workstations and access to our in-house device lab.
- Sponsored learning: certifications, workshops, and tech conferences.
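As an example of the Python scripting against production Kubernetes clusters that this role emphasizes, here is a minimal sketch that flags pods which are not Running or Succeeded. Kubeconfig-based access is an assumption; in a real setup the results would feed an alerting pipeline rather than being printed.

```python
"""Minimal sketch: flag pods that are not Running or Succeeded across a cluster.

Assumptions: kubeconfig-based access; a production version would push results
to an alerting system instead of printing them.
"""
from kubernetes import client, config

def unhealthy_pods():
    config.load_kube_config()  # or config.load_incluster_config() when run inside a pod
    v1 = client.CoreV1Api()
    bad = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            bad.append((pod.metadata.namespace, pod.metadata.name, phase))
    return bad

if __name__ == "__main__":
    for ns, name, phase in unhealthy_pods():
        print(f"{ns}/{name}: {phase}")
```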
Posted 1 week ago
4.0 - 5.0 years
3 - 6 Lacs
Hyderabad
Work from Office
Job Summary: We are looking for an experienced and detail-oriented PostgreSQL Database Administrator (DBA) to manage and maintain our database systems. The ideal candidate will have a strong background in PostgreSQL administration, performance tuning, backup and recovery, and high availability solutions.

Key Responsibilities:
- Install, configure, upgrade, and maintain PostgreSQL database servers.
- Monitor database performance, implement changes, and apply new patches and versions when required.
- Ensure high availability, backup, and disaster recovery strategies are in place and tested.
- Perform regular database maintenance tasks including re-indexing, vacuuming, and tuning.
- Manage database access, roles, and permissions securely.
- Write and maintain scripts to automate routine database tasks.
- Work closely with developers to optimize queries and schema design.
- Troubleshoot and resolve database-related issues promptly.
- Implement and monitor replication strategies (logical and physical replication).
- Perform regular security assessments and apply best practices to secure data.
- Participate in the on-call rotation and provide production support as needed.

Required Skills:
- Minimum 4 years of hands-on experience with PostgreSQL administration.
- Strong experience in performance tuning and query optimization.
- Experience with database backup, restore, and disaster recovery planning.
- Good understanding of PostgreSQL internals.
- Familiarity with tools like pgAdmin, pgBouncer, pgBackRest, or Patroni.
- Knowledge of Linux/Unix systems for managing PostgreSQL on those platforms.
- Experience with shell scripting and automation tools.
- Basic understanding of cloud platforms like AWS/GCP/Azure (RDS, Aurora, etc.) is a plus.
- Knowledge of monitoring tools like Prometheus, Grafana, or similar.
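To illustrate the replication monitoring this role mentions, here is a minimal Python sketch that reports streaming-replication lag from a PostgreSQL primary using psycopg2. The DSN is an assumed placeholder, and the query relies on pg_stat_replication and pg_wal_lsn_diff(), which assume PostgreSQL 10 or newer.

```python
"""Minimal sketch: report streaming-replication lag from a PostgreSQL primary.

Assumptions: the DSN is a placeholder; pg_stat_replication and
pg_wal_lsn_diff() require PostgreSQL 10 or newer.
"""
import psycopg2

DSN = "host=primary.example.internal dbname=postgres user=monitor"  # assumed

LAG_QUERY = """
SELECT application_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;
"""

def replication_lag():
    with psycopg2.connect(DSN) as conn:
        with conn.cursor() as cur:
            cur.execute(LAG_QUERY)
            return cur.fetchall()

if __name__ == "__main__":
    for name, lag_bytes in replication_lag():
        print(f"{name}: {lag_bytes} bytes behind")
```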
Posted 1 week ago
3.0 - 5.0 years
5 - 7 Lacs
Visakhapatnam, Onsite
Work from Office
Reports To: Senior Engineer/Team Lead

Job Overview: We are looking for dedicated back-end engineers to join our team and contribute to our server-side development processes. You will be responsible for designing and maintaining scalable web services, managing databases, and collaborating with stakeholders to ensure seamless integration between the front end and back end.

Key Responsibilities:
1. Develop and maintain server-side applications.
2. Build scalable and secure web services using backend programming languages like .NET, Python, Java, and Node.js.
3. Manage databases and data storage. Design and optimize databases on PostgreSQL, MySQL, MongoDB, or SQL Server while ensuring secure and reliable data management.
4. Collaborate with team members. Work closely with front-end developers, designers, and project managers to ensure alignment between server-side functionality and user interfaces.
5. Implement APIs and frameworks. Design and implement RESTful APIs to facilitate communication between server-side applications and end-user systems.
6. Conduct troubleshooting and debugging. Identify and resolve performance bottlenecks, security vulnerabilities, and server-side errors to maintain system stability.
7. Optimize scalability and workflow. Develop reusable code and scalable solutions to accommodate future growth.
8. Integrate core backend systems with multiple external parties.
9. Perform test-driven development.
10. Develop systems with logging and observability as core tenets.

Key Technical Requirements:
1. Programming Languages: Proficient in at least one server-side language (Python, Java, Node.js, Go, or .NET Core), writing clean, modular, and scalable code.
2. Frameworks & Libraries: Experience with backend frameworks such as Flask, FastAPI, or Django (Python); Spring Boot (Java); Express.js or NestJS (Node.js).
3. API Development: Strong expertise in designing and implementing RESTful APIs and GraphQL APIs. Understanding of API authentication (API keys). Familiar with API documentation tools (Swagger/OpenAPI).
4. Database Management: Experience with RDBMS (PostgreSQL, MySQL, MS SQL) and NoSQL databases (MongoDB). Writing optimized queries and knowledge of schema design and indexing.
5. Microservices Architecture: Understanding and experience in building scalable microservices. Knowledge of message brokers like Kafka and RabbitMQ.
6. Security Best Practices: Knowledge of securing APIs (rate limiting, CORS, input sanitization).
7. Cloud & Containerization: Experience with cloud platforms like AWS, Azure, or GCP. Containerization using Docker and orchestration with Kubernetes. (Bonus) CI/CD pipelines using tools like Jenkins, GitHub Actions, or GitLab CI.
8. Version Control: Proficient in using Git and GitHub/GitLab/Bitbucket workflows.
9. Testing & Debugging: Writing unit, integration, and performance tests using frameworks like PyTest, JUnit, Mocha, or Postman. Proficient in using debugging and profiling tools.
10. Monitoring & Logging: Familiarity with logging and monitoring frameworks (ELK Stack, Prometheus, Grafana). Error monitoring with tools like Sentry, Datadog, or New Relic.
11. Agile Development: Comfortable working in Agile/Scrum teams.

Soft Skills:
1. Strong communication and stakeholder management.
2. Ability to work as an individual contributor and team member.
3. Problem solving.
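For illustration of the RESTful API design and API-key authentication this posting asks for, here is a minimal Python sketch using FastAPI. The key value and the in-memory item store are assumed placeholders; a real service would back this with a database and proper secret management.

```python
"""Minimal sketch: a REST endpoint with simple API-key authentication in FastAPI.

Assumptions: the key value and the item store are placeholders; a real service
would use a database and proper secret management.
"""
from fastapi import FastAPI, Header, HTTPException

app = FastAPI()
API_KEY = "change-me"                     # assumed placeholder
ITEMS = {1: {"id": 1, "name": "sample"}}  # stand-in for a real data store

@app.get("/items/{item_id}")
def get_item(item_id: int, x_api_key: str = Header(default="")):
    # FastAPI maps the x_api_key parameter to the X-Api-Key request header
    if x_api_key != API_KEY:
        raise HTTPException(status_code=401, detail="invalid API key")
    if item_id not in ITEMS:
        raise HTTPException(status_code=404, detail="item not found")
    return ITEMS[item_id]

# Run with: uvicorn app:app --reload   (assuming this file is saved as app.py)
```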
Posted 1 week ago
5.0 - 8.0 years
9 - 13 Lacs
Bengaluru
Work from Office
About The Role:
- Proficiency in problem solving and troubleshooting technical issues.
- Willingness to take ownership and strive for the best solutions.
- Experience using performance analysis tools such as Android Profiler, Traceview, Perfetto, and Systrace.
- Strong understanding of Android architecture, memory management, and threading.
- Strong understanding of Android HALs, the Car Framework, the Android graphics pipeline, DRM, and codecs.
- Good knowledge of hardware abstraction layers in Android and/or Linux.
- Good understanding of Git and CI/CD workflows.
- Experience in Agile-based projects.
- Experience with Linux as a development platform and target.
- Extensive experience with Jenkins and GitLab CI systems.
- Hands-on experience with GitLab, Jenkins, Artifactory, Grafana, Prometheus, and/or Elasticsearch.
- Experience with different testing frameworks and their implementation in a CI system.
- Programming using C/C++, Java/Kotlin, Linux.
- Yocto and its use in CI environments.
- Familiarity with ASPICE.

This role works in the area of Software Engineering, which encompasses the development, maintenance, and optimization of software solutions/applications:
1. Applies scientific methods to analyse and solve software engineering problems.
2. Is responsible for the development and application of software engineering practice and knowledge in research, design, development, and maintenance.
3. Exercises original thought and judgement, with the ability to supervise the technical and administrative work of other software engineers.
4. Builds the skills and expertise of the software engineering discipline to reach the standard skill expectations for the applicable role, as defined in Professional Communities.
5. Collaborates and acts as a team player with other software engineers and stakeholders.

Grade Specific: Is highly respected, experienced, and trusted. Masters all phases of the software development lifecycle and applies innovation and industrialization. Shows a clear dedication and commitment to business objectives and responsibilities and to the group as a whole. Operates with no supervision in highly complex environments and takes responsibility for a substantial aspect of Capgemini's activity. Is able to manage difficult and complex situations calmly and professionally. Considers the bigger picture when making decisions and demonstrates a clear understanding of commercial and negotiating principles in less-easy situations. Focuses on developing long-term partnerships with clients. Demonstrates leadership that balances business, technical, and people objectives. Plays a significant part in the recruitment and development of people.

Skills (competencies): Verbal Communication
Posted 1 week ago
7.0 - 11.0 years
35 - 50 Lacs
Bengaluru
Work from Office
About the Role: This role is responsible for managing and maintaining complex, distributed big data ecosystems. It ensures the reliability, scalability, and security of large-scale production infrastructure. Key responsibilities include automating processes, optimizing workflows, troubleshooting production issues, and driving system improvements across multiple business verticals.

Roles and Responsibilities:
- Manage, maintain, and support incremental changes to Linux/Unix environments.
- Lead on-call rotations and incident responses, conducting root cause analysis and driving postmortem processes.
- Design and implement automation systems for managing big data infrastructure, including provisioning, scaling, upgrading, and patching clusters.
- Troubleshoot and resolve complex production issues while identifying root causes and implementing mitigating strategies.
- Design and review scalable and reliable system architectures.
- Collaborate with teams to optimize overall system/cluster performance.
- Enforce security standards across systems and infrastructure.
- Set technical direction, drive standardization, and operate independently.
- Ensure availability, performance, and scalability of systems and services through proactive monitoring, maintenance, and capacity planning.
- Analyze and respond to system outages and disruptions, and implement measures to prevent similar incidents from recurring.
- Develop tools and scripts to automate operational processes, reducing manual workload, increasing efficiency, and improving system resilience.
- Monitor and optimize system performance and resource usage, identify and address bottlenecks, and implement best practices for performance tuning.
- Collaborate with development teams to integrate best practices for reliability, scalability, and performance into the software development lifecycle.
- Stay informed of industry technology trends and innovations, and actively contribute to the organization's technology communities.
- Develop and enforce SRE best practices and principles.
- Align across functional teams on priorities and deliverables.
- Drive automation to enhance operational efficiency.
- Adopt new technologies as the need arises and define architectural recommendations for new tech stacks.

Preferred Candidate Profile:
- Over 6 years of experience managing and maintaining distributed big data ecosystems.
- Strong expertise in Linux, including IP, iptables, and IPsec.
- Proficiency in scripting/programming with languages like Perl, Golang, or Python.
- Hands-on experience with the Hadoop stack (HDFS, HBase, Airflow, YARN, Ranger, Kafka, Pinot).
- Familiarity with open-source configuration management and deployment tools such as Puppet, Salt, Chef, or Ansible.
- Solid understanding of networking, open-source technologies, and related tools.
- Excellent communication and collaboration skills.
- DevOps tools: SaltStack, Ansible, Docker, Git.
- SRE logging and monitoring tools: ELK stack, Grafana, Prometheus, OpenTSDB, OpenTelemetry.

Good to Have:
- Experience managing infrastructure on public cloud platforms (AWS, Azure, GCP).
- Experience in designing and reviewing system architectures for scalability and reliability.
- Experience with observability tools to visualize and alert on system performance.
- Experience with massive petabyte-scale data migrations and upgrades.
Posted 1 week ago
3.0 - 8.0 years
6 - 12 Lacs
Pune
Work from Office
Greetings from Peoplefy! We are hiring for one of our MNC clients based in Pune (Yerawada). Immediate joiners only.

Required skills:
- .NET or Java
- Expertise in MS SQL Server
- ITIL processes
- Monitoring tools
- Application or production support experience

Interested candidates for the above position, kindly share your CV at gayatri.pat @peoplefy.com with the below details:
- Experience:
- CTC:
- Expected CTC:
- Notice Period:
- Location:
Posted 1 week ago
2.0 - 4.0 years
4 - 6 Lacs
Chennai
Work from Office
Job Description/Preferred Qualifications: We are seeking a highly skilled and motivated MLOps Site Reliability Engineer (SRE) to join our team. In this role, you will be responsible for ensuring the reliability, scalability, and performance of our machine learning infrastructure. You will work closely with data scientists, machine learning engineers, and software developers to build and maintain robust and efficient systems that support our machine learning workflows. This position offers an exciting opportunity to work on cutting-edge technologies and make a significant impact on our organization's success.

Responsibilities:
- Design, implement, and maintain scalable and reliable machine learning infrastructure.
- Collaborate with data scientists and machine learning engineers to deploy and manage machine learning models in production.
- Develop and maintain CI/CD pipelines for machine learning workflows.
- Monitor and optimize the performance of machine learning systems and infrastructure.
- Implement and manage automated testing and validation processes for machine learning models.
- Ensure the security and compliance of machine learning systems and data.
- Troubleshoot and resolve issues related to machine learning infrastructure and workflows.
- Document processes, procedures, and best practices for machine learning operations.
- Stay up to date with the latest developments in MLOps and related technologies.

Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience as a Site Reliability Engineer (SRE) or in a similar role.
- Strong knowledge of machine learning concepts and workflows.
- Proficiency in programming languages such as Python, Java, or Go.
- Experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Familiarity with containerization technologies like Docker and Kubernetes.
- Experience with CI/CD tools such as Jenkins, GitLab CI, or CircleCI.
- Strong problem-solving skills and the ability to troubleshoot complex issues.
- Excellent communication and collaboration skills.

Preferred Qualifications:
- Master's degree in Computer Science, Engineering, or a related field.
- Experience with machine learning frameworks such as TensorFlow, PyTorch, or Scikit-learn.
- Knowledge of data engineering and data pipeline tools such as Apache Spark, Apache Kafka, or Airflow.
- Experience with monitoring and logging tools such as Prometheus, Grafana, or the ELK stack.
- Familiarity with infrastructure as code (IaC) tools like Terraform or Ansible.
- Experience with automated testing frameworks for machine learning models.
- Knowledge of security best practices for machine learning systems and data.

Minimum Qualifications: Master's-level degree, or Bachelor's-level degree with 2 years of related work experience.
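As a small illustration of the model-serving monitoring this role describes, here is a minimal Python sketch that exposes request and latency metrics with the prometheus_client library. The metric names, port, and the predict() stub are assumed placeholders; in practice the counter and histogram would wrap a real inference handler.

```python
"""Minimal sketch: expose request and latency metrics for a model-serving loop.

Assumptions: metric names, port, and the predict() stub are placeholders; in
practice the instrumentation would wrap a real inference handler.
"""
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

PREDICTIONS = Counter("model_predictions_total", "Number of predictions served")
LATENCY = Histogram("model_prediction_latency_seconds", "Prediction latency")

def predict(features):
    # Stand-in for a real model call
    time.sleep(random.uniform(0.01, 0.05))
    return sum(features)

@LATENCY.time()
def handle_request(features):
    PREDICTIONS.inc()
    return predict(features)

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at :8000/metrics for Prometheus to scrape
    while True:
        handle_request([random.random() for _ in range(4)])
```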
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As a DataOps Engineer, you will be responsible for designing and maintaining scalable ML model deployment infrastructure using Kubernetes and Docker. Your role will involve implementing CI/CD pipelines for ML workflows, ensuring security best practices are followed, and setting up monitoring tools to track system health, model performance, and data pipeline issues. You will collaborate with cross-functional teams to streamline the end-to-end lifecycle of data products and identify performance bottlenecks and data reliability issues in the ML infrastructure.

To excel in this role, you should have strong experience with Kubernetes and Docker for containerization and orchestration, hands-on experience deploying ML models in production environments, and proficiency with orchestration tools like Airflow or Luigi. Familiarity with monitoring tools such as Prometheus, Grafana, or the ELK Stack, and knowledge of security protocols, CI/CD pipelines, and DevOps practices in a data/ML environment are essential. Exposure to cloud platforms like AWS, GCP, or Azure is preferred.

Additionally, experience with MLflow, Seldon, or Kubeflow, knowledge of data governance, lineage, and compliance standards, and an understanding of data pipelines and streaming frameworks would be advantageous. Your combined expertise across these tools and practices will be key to your success in this position.
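To give a concrete feel for the Airflow-style orchestration this role mentions, here is a minimal Python sketch of a daily DAG that chains a data-validation step and a model-deployment step. The task bodies are placeholders, and the sketch assumes the Airflow 2.x PythonOperator API (airflow.operators.python).

```python
"""Minimal sketch: a daily Airflow DAG chaining data validation and model deployment.

Assumptions: task bodies are placeholders; written against the Airflow 2.x
PythonOperator API (airflow.operators.python).
"""
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def validate_data(**context):
    print("running data quality checks")  # placeholder

def deploy_model(**context):
    print("rolling out model container")  # placeholder

with DAG(
    dag_id="ml_pipeline_ops",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    validate = PythonOperator(task_id="validate_data", python_callable=validate_data)
    deploy = PythonOperator(task_id="deploy_model", python_callable=deploy_model)
    validate >> deploy
```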
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
Karnataka
On-site
As a Senior Full Stack Developer (Java + React) in the Fintech/Insurance domain, you will leverage your 10+ years of experience in full-stack development to deliver high-quality solutions. Your technical strengths should include proficiency in Java 11+, Spring Boot, and REST APIs, along with strong expertise in React, TypeScript, and modern frontend frameworks.

In this role, experience with microservices and micro-frontend architecture is crucial, along with cloud deployment experience, preferably on Azure. You should also have knowledge of Kafka, distributed systems, and API gateways. Familiarity with observability tools such as Grafana, ELK, Prometheus, and Splunk is highly desirable, and experience with Strapi CMS and OpenFeature for feature management would be an added advantage.

Apart from your technical skills, strong leadership and communication abilities are essential. You should have experience leading Agile development teams and be capable of managing risks, dependencies, and third-party integrations. Confidence in working with cross-functional and remote teams is a key requirement.

This is a contractual/temporary position with a contract length of 6 months. The work location is remote, and you must align with Singapore hours. You will collaborate with a dynamic team to deliver innovative solutions in the Fintech/Insurance domain.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a Site Reliability Engineer III at JPMorgan Chase within Corporate Technology, you will play a crucial role in driving innovation and modernizing complex, mission-critical systems. Your primary responsibility will be to solve intricate business problems by providing simple and effective solutions through code and cloud infrastructure. You will configure, maintain, monitor, and optimize applications and their associated infrastructure while continuously improving existing solutions. Your expertise in end-to-end operations, availability, reliability, and scalability will make you a valuable asset to the team.

You will guide and support others in designing appropriate solutions and collaborate with software engineers to implement deployment strategies using automated continuous integration and continuous delivery pipelines. Your role will also involve designing, developing, testing, and implementing availability, reliability, and scalability solutions for applications. Additionally, you will be responsible for implementing infrastructure, configuration, and network as code for the applications and platforms under your purview.

Collaboration with technical experts, stakeholders, and team members will be essential in resolving complex issues. You will use service level indicators and objectives to proactively address issues before they impact customers, and you will support the adoption of site reliability engineering best practices within your team to ensure operational excellence.

To qualify for this role, you should have formal training or certification in software engineering concepts along with at least 3 years of applied experience. Proficiency in site reliability principles and experience implementing site reliability within applications or platforms is required. You should be adept in at least one programming language such as Python, Java/Spring Boot, or .NET, and have knowledge of software applications and technical processes in disciplines such as Cloud, AI, or Android. Experience with observability, continuous integration and continuous delivery tools, container technologies, network troubleshooting, and collaboration within large teams is highly valued. Your proactive approach to problem-solving, eagerness to learn new technologies, and ability to identify innovative solutions will be crucial in this role. Preferred qualifications include experience in the banking or financial domain.
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
As a DevOps Engineer at Wabtec Corporation, you will play a crucial role in performing CI/CD and automation design/validation activities. Reporting to the Technical Project Manager and working closely with the software architect, you will be responsible for adhering to internal processes, including coding rules, and documenting implementations accurately. Your focus will be on meeting the Quality, Cost, and Time objectives set by the Technical Project Manager.

To qualify for this role, you should hold a Bachelor's or Master's degree in Engineering in Computer Science (with a web option), IT, or a related field, have 6 to 10 years of hands-on experience as a DevOps Engineer, and possess the following abilities:
- A good understanding of Linux systems and networking.
- Proficiency in CI/CD tools like GitLab.
- Knowledge of containerization technologies such as Docker.
- Experience with scripting languages like Bash and Python.
- Hands-on experience in setting up CI/CD pipelines and configuring virtual machines.
- Familiarity with C/C++ build tools like CMake and Conan.
- Expertise in setting up pipelines in GitLab for builds, unit testing, and static analysis.
- Experience with infrastructure-as-code tools like Terraform or Ansible.
- Proficiency in monitoring and logging tools such as the ELK Stack or Prometheus/Grafana.
- Strong problem-solving skills and the ability to troubleshoot production issues.
- A passion for continuous learning and staying up to date with modern technologies and trends in the DevOps field.
- Familiarity with project management and workflow tools like Jira, SPIRA, Teams Planner, and Polarion.

In addition to technical skills, soft skills are also crucial for this role. You should have a good level of English proficiency, be autonomous, possess good interpersonal and communication skills, have strong synthesis skills, be a solid team player, and be able to handle multiple tasks efficiently.

At Wabtec, we are committed to embracing diversity and inclusion. We value the variety of experiences, expertise, and backgrounds that our employees bring and aim to create an inclusive environment where everyone belongs. By fostering a culture of leadership, diversity, and inclusion, we believe that we can harness the brightest minds to drive innovation and create limitless opportunities.

If you are ready to join a global company that is revolutionizing the transportation industry and are passionate about driving exceptional results through continuous improvement, we invite you to apply for the role of Lead/Engineer DevOps at Wabtec Corporation.
Posted 1 week ago
1.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
As an Associate Manager - Data IntegrationOps, you will play a crucial role in supporting and managing data integration and operations programs within our data organization. Your responsibilities will involve maintaining and optimizing data integration workflows, ensuring data reliability, and supporting operational excellence. To succeed in this position, you will need a solid understanding of enterprise data integration, ETL/ELT automation, cloud-based platforms, and operational support.

Your primary duties will include assisting in the management of Data IntegrationOps programs, aligning them with business objectives, data governance standards, and enterprise data strategies. You will also monitor and enhance data integration platforms through real-time monitoring, automated alerting, and self-healing capabilities to improve uptime and system performance. Additionally, you will help develop and enforce data integration governance models, operational frameworks, and execution roadmaps to ensure smooth data delivery across the organization.

Collaboration with cross-functional teams will be essential to optimize data movement across cloud and on-premises platforms, ensuring data availability, accuracy, and security. You will also promote a data-first culture by aligning with PepsiCo's Data & Analytics program and supporting global data engineering efforts across sectors, and you will drive continuous improvement initiatives to enhance the reliability, scalability, and efficiency of data integration processes.

Furthermore, you will support data pipelines using ETL/ELT tools such as Informatica IICS, PowerCenter, DDH, SAP BW, and Azure Data Factory under the guidance of senior team members. Developing API-driven data integration solutions using REST APIs and Kafka, deploying and managing cloud-based data platforms like Azure Data Services, AWS Redshift, and Snowflake, and participating in implementing DevOps practices using tools like Terraform, GitOps, Kubernetes, and Jenkins will also be part of your role.

Your qualifications should include at least 9 years of technology work experience in a large-scale, global organization, preferably in the CPG (Consumer Packaged Goods) industry, and 4+ years of experience in Data Integration, Data Operations, and Analytics, along with experience working in cross-functional IT organizations. Leadership/management experience supporting technical teams and hands-on experience in monitoring and supporting SAP BW processes are also required.

In summary, as an Associate Manager - Data IntegrationOps, you will support and manage data integration and operations programs, collaborate with cross-functional teams, and ensure the efficiency and reliability of data integration processes. Your expertise in enterprise data integration, ETL/ELT automation, cloud-based platforms, and operational support will be key to your success in this role.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
You are a skilled DevOps Specialist with over 3 years of experience, seeking to join a global automotive team with locations in Kochi, Pune, and Chennai. Your primary role will involve managing operations, system monitoring, troubleshooting, and supporting automation workflows to ensure the operational stability and excellence of enterprise IT projects. You will play a crucial part in overseeing critical application environments for leading companies in the automotive industry.

Your responsibilities will include performing daily maintenance tasks to ensure application availability and system performance through proactive incident tracking, log analysis, and resource monitoring. Additionally, you will monitor and respond to tickets raised by the DevOps team or end users, support users with troubleshooting, maintain detailed incident logs, track SLAs, and prepare root cause analysis reports. You will also assist with scheduled changes, releases, and maintenance activities while identifying and tracking recurring issues.

Furthermore, you will maintain process documentation, runbooks, and knowledge base articles, and provide regular updates to stakeholders on incidents and resolutions. You will also manage and troubleshoot CI/CD tools such as Jenkins and GitLab, container platforms like Docker and Kubernetes, and cloud services including AWS and Azure.

To excel in this role, you should have proficiency in log file analysis and troubleshooting (ELK Stack), Linux administration, and monitoring tools such as AppDynamics, Checkmk, Prometheus, and Grafana. Experience with security tools like Black Duck, SonarQube, Dependabot, and OWASP is essential. Hands-on experience with Docker, familiarity with DevOps principles, and experience with ticketing tools like ServiceNow are also required. Experience handling confidential data and safety-sensitive systems, strong analytical, communication, and organizational skills, and the ability to work effectively in a team environment will be beneficial. Optional qualifications include experience in the automotive or manufacturing industry, particularly with production management systems, and familiarity with IT process frameworks like Scrum and ITIL.

In summary, as a DevOps Specialist you will help ensure the operational stability and excellence of enterprise IT projects for leading automotive companies through operations management, system monitoring, troubleshooting, and automation support. Your expertise in tools such as the ELK Stack, Docker, Jenkins, AWS, and Azure, along with strong analytical and communication skills, will be instrumental to your success in this role.
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
As a Site Reliability Engineering (SRE) Technical Leader on the Network Assurance Data Platform (NADP) team at Cisco ThousandEyes, you will be responsible for ensuring the reliability, scalability, and security of the cloud and big data platforms. Your role will involve representing the NADP SRE team, contributing to the technical roadmap, and collaborating with cross-functional teams to design, build, and maintain SaaS systems operating at multi-region scale. Your efforts will be crucial in supporting machine learning (ML) and AI initiatives by ensuring the platform infrastructure is robust, efficient, and aligned with operational excellence.

You will design, build, and optimize cloud and data infrastructure to guarantee high availability, reliability, and scalability of big-data and ML/AI systems, applying SRE principles such as monitoring, alerting, error budgets, and fault analysis. Additionally, you will collaborate with various teams to create secure and scalable solutions, troubleshoot technical problems, lead the architectural vision, and shape the technical strategy and roadmap.

Your role will also encompass mentoring and guiding teams, fostering a culture of engineering and operational excellence, engaging with customers and stakeholders to understand use cases and feedback, and using your strong programming skills to integrate software and systems engineering. Furthermore, you will develop strategic roadmaps, processes, plans, and infrastructure to efficiently deploy new software components at enterprise scale while enforcing engineering best practices.

To be successful in this role, you should have relevant experience (8-12 years) and a bachelor's degree in computer science or its equivalent. You should be able to design and implement scalable solutions, with hands-on experience in the cloud (preferably AWS), infrastructure-as-code skills, experience with observability tools, proficiency in programming languages such as Python or Go, and a good understanding of Unix/Linux systems and client-server protocols. Experience building cloud, big data, and/or ML/AI infrastructure is essential, along with a sense of ownership and accountability in architecting software and infrastructure at scale.

Additional qualifications that would be advantageous include experience with the Hadoop ecosystem, certifications in cloud and security domains, and experience building or managing a cloud-based data platform. Cisco encourages individuals from diverse backgrounds to apply, as the company values the perspectives and skills that emerge from employees with varied experiences. Cisco believes in unlocking potential and creating diverse teams that are better equipped to solve problems, innovate, and make a positive impact.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
About Us: LSEG (London Stock Exchange Group) is more than a diversified global financial markets infrastructure and data business. We are dedicated, open-access partners with a commitment to excellence in delivering the services our customers expect from us. With extensive experience, deep knowledge, and a worldwide presence across financial markets, we enable businesses and economies around the world to fund innovation, manage risk, and create jobs. It's how we've contributed to supporting the financial stability and growth of communities and economies globally for more than 300 years.

The Analytics group is part of London Stock Exchange Group's Data & Analytics Technology division. Analytics has established a very strong reputation for providing prudent and reliable analytic solutions to financial industries. With a strong presence in the North American financial markets and rapid growth in other markets, the group is now looking to increase its market share globally by building new capabilities such as Analytics as a Service: a one-stop-shop solution for all analytics needs through an API- and cloud-first approach.

Position Summary: The Analytics DevOps group is looking for a highly motivated and skilled DevOps Engineer to join our dynamic team to help build, deploy, and maintain our cloud and on-prem infrastructure and applications. You will play a key role in driving automation, monitoring, and continuous improvement in our development, modernization, and operational processes.

Key Responsibilities & Accountabilities:
- Infrastructure as Code (IaC): Develop and manage infrastructure using tools like Terraform, Helm charts, CloudFormation, or Ansible to ensure consistent and scalable environments.
- CI/CD Pipeline Development: Build, optimize, and maintain continuous integration and continuous deployment (CI/CD) pipelines using Jenkins, GitLab, GitHub, or similar tools.
- Cloud and On-Prem Infrastructure Management: Work with cloud providers (Azure, AWS, GCP) and on-prem infrastructure (VMware, Linux servers) to deploy, manage, and monitor infrastructure and services.
- Automation: Automate repetitive tasks, improve operational efficiency, and reduce human intervention in building and deploying applications and services.
- Monitoring & Logging: Work with the SRE team to set up monitoring and alerting systems using tools like Prometheus, Grafana, Datadog, or others to ensure high availability and performance of applications and infrastructure.
- Collaboration: Collaborate with architects, operations, and developers to ensure seamless integration between development, testing, and production environments.
- Security Best Practices: Implement and enforce security protocols and procedures, including access controls, encryption, and vulnerability scanning and remediation.
- Support: Provide support for issue resolution related to application deployment and other DevOps-related activities.

Essential Skills, Qualifications & Experience:
- Bachelor's or Master's degree in computer science, engineering, or a related field with experience (or equivalent 3-5 years of practical experience).
- 5+ years of experience practicing DevOps.
- Proven experience as a DevOps Engineer or Software Engineer in an agile, cloud-based environment.
- Strong understanding of Linux/Unix system management.
- Hands-on experience with cloud platforms (AWS, Azure, GCP), Azure preferred.
- Proficiency in infrastructure automation tools such as Terraform, Helm charts, Ansible, etc.
- Strong experience with CI/CD tools: GitLab, Jenkins.
- Experience/knowledge of version control systems: Git, GitLab, GitHub.
- Experience with containerization (Kubernetes, Docker) and orchestration.
- Experience with modern monitoring and logging tools such as Grafana, Prometheus, and Datadog.
- Working experience with scripting languages such as Bash, Python, or Groovy.
- Strong problem-solving and troubleshooting skills.
- Excellent communication skills and the ability to work in team environments.
- Experience with serverless architecture and microservices is a plus.
- Strong knowledge of networking concepts (DNS, load balancers, etc.) and security practices (firewalls, encryption).
- Experience working in an Agile/Scrum environment is a plus.
- Certifications in DevOps or cloud technologies (e.g., Azure DevOps Solutions, AWS Certified DevOps) are a plus.

LSEG is a leading global financial markets infrastructure and data provider. Our purpose is driving financial stability, empowering economies, and enabling customers to create sustainable growth. Our purpose is the foundation on which our culture is built. Our values of Integrity, Partnership, Excellence, and Change underpin our purpose and set the standard for everything we do, every day. They go to the heart of who we are and guide our decision-making and everyday actions.

Working with us means that you will be part of a dynamic organization of 25,000 people across 65 countries. We will value your individuality and enable you to bring your true self to work so you can help enrich our diverse workforce. You will be part of a collaborative and creative culture where we encourage new ideas and are committed to sustainability across our global business. You will experience the critical role we have in helping to re-engineer the financial ecosystem to support and drive sustainable economic growth. Together, we are aiming to achieve this growth by accelerating the just transition to net zero, enabling growth of the green economy, and creating inclusive economic opportunity.

LSEG offers a range of tailored benefits and support, including healthcare, retirement planning, paid volunteering days, and wellbeing initiatives.

Please take a moment to read this privacy notice carefully, as it describes what personal information London Stock Exchange Group (LSEG) (we) may hold about you, what it's used for, how it's obtained, your rights, and how to contact us as a data subject. If you are submitting as a Recruitment Agency Partner, it is essential and your responsibility to ensure that candidates applying to LSEG are aware of this privacy notice.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
kasaragod, kerala
On-site
As a DevOps Engineer with 3+ years of experience, you will be responsible for managing and optimizing the infrastructure that powers the way.com platform, ensuring high availability, security, and scalability. Your role will involve collaborating with cross-functional teams to automate processes, streamline CI/CD pipelines, and ensure smooth deployment across various environments. Your expertise in cloud platforms such as DigitalOcean and AWS, CI/CD pipelines (Jenkins), Infrastructure as Code tools (Terraform, Ansible), and container tools like Docker and Kubernetes will play a key role in your success.

Key Responsibilities:
- Design, implement, and maintain scalable infrastructure using cloud platforms like DigitalOcean, AWS, or Azure.
- Automate infrastructure provisioning and configuration management using tools like Terraform and Ansible.
- Build, manage, and optimize continuous integration/continuous deployment (CI/CD) pipelines using Jenkins.
- Emphasize automation, monitoring, and continuous improvement within a DevOps culture.
- Collaborate with development, product, and QA teams to enhance system performance, deployment processes, and incident response times.
- Monitor system health and performance using tools like Prometheus, Grafana, and cloud-specific monitoring tools.
- Manage and optimize container orchestration platforms (Docker, Kubernetes) for efficient application delivery.
- Implement security best practices across the infrastructure, including patch management, vulnerability scanning, and access control.
- Troubleshoot and resolve production issues to ensure minimal downtime and disruption to services.

Secondary Skills (if applicable):
- Experience with serverless architectures and microservices.
- Familiarity with monitoring and logging tools like the ELK Stack, Datadog, or Splunk.
- Knowledge of database management and performance tuning (MySQL, PostgreSQL, or NoSQL).
- Experience with configuration management tools like Ansible, Chef, or Puppet.

If you are passionate about DevOps, automation, and working in a collaborative environment to optimize infrastructure and deployment processes, this role in Trivandrum could be the perfect fit for you.
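As a small illustration of the monitoring and troubleshooting duties described above, the following sketch uses the official kubernetes Python client (pip install kubernetes) to list pods that are not in a healthy phase. The kubeconfig-based authentication and the health criteria are assumptions, not details from the posting.

    from kubernetes import client, config

    def unhealthy_pods() -> list:
        # Loads credentials from the local kubeconfig (e.g. ~/.kube/config).
        config.load_kube_config()
        v1 = client.CoreV1Api()
        bad = []
        for pod in v1.list_pod_for_all_namespaces(watch=False).items:
            phase = pod.status.phase
            if phase not in ("Running", "Succeeded"):
                bad.append(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")
        return bad

    if __name__ == "__main__":
        for line in unhealthy_pods():
            print(line)

In practice the output of a check like this would typically be pushed to an alerting channel or dashboard rather than printed, but the structure of the loop is the same.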
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
We are looking for a DevOps + Kubernetes Engineer to join our team in Bengaluru or Hyderabad. Your main responsibility will be to build, maintain, and scale our infrastructure using Kubernetes and DevOps best practices. You will collaborate with development and operations teams to implement automation processes, manage CI/CD pipelines, and ensure efficient infrastructure management for scalable and reliable applications.

Your key responsibilities will include designing, implementing, and maintaining Kubernetes clusters for production, staging, and development environments. You will also manage CI/CD pipelines for automated application deployment and infrastructure provisioning, utilizing tools such as Helm, Terraform, or Ansible. Monitoring and optimizing the performance, scalability, and availability of applications and infrastructure will be part of your duties, as well as collaborating with software engineers to enhance system performance and optimize cloud infrastructure. Troubleshooting, debugging, and resolving production environment issues in a timely manner, implementing security best practices for managing containerized environments and DevOps workflows, and contributing to the continuous improvement of development and deployment processes using DevOps tools are also essential aspects of the role.

The ideal candidate should have 6-8 years of experience in DevOps with a strong focus on Kubernetes and containerized environments. Expertise in Kubernetes cluster management and orchestration, proficiency in CI/CD pipeline tools like Jenkins, GitLab CI, or CircleCI, and strong experience with cloud platforms such as AWS, Azure, or GCP are required. Knowledge of Docker for containerization, Helm for managing Kubernetes applications, and infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible, along with familiarity with monitoring and logging tools such as Prometheus, Grafana, the ELK stack, or Datadog, are also important. Strong scripting skills in Bash, Python, or Groovy, experience with version control systems like Git, excellent problem-solving and troubleshooting skills (especially in distributed environments), and a good understanding of security best practices in cloud and containerized environments are necessary qualifications for this role.
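Since the role combines CI/CD pipeline management with Kubernetes cluster operations, here is a minimal hedged sketch of a post-deploy gate a pipeline might run after a Helm upgrade, shelling out to kubectl. The deployment name and namespace are illustrative assumptions only.

    import subprocess
    import sys

    def rollout_succeeded(deployment: str, namespace: str, timeout: str = "120s") -> bool:
        # 'kubectl rollout status' blocks until the rollout completes or the timeout expires.
        result = subprocess.run(
            ["kubectl", "rollout", "status", f"deployment/{deployment}",
             "-n", namespace, f"--timeout={timeout}"],
            capture_output=True,
            text=True,
        )
        print(result.stdout or result.stderr)
        return result.returncode == 0

    if __name__ == "__main__":
        if not rollout_succeeded("web-frontend", "staging"):  # hypothetical names
            sys.exit(1)  # non-zero exit fails the pipeline stage so the release can be rolled back

Failing the stage on an unsuccessful rollout is what lets Jenkins, GitLab CI, or CircleCI trigger a rollback or page the on-call engineer instead of silently promoting a broken release.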
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
Candescent is the largest non-core digital banking provider, bringing together transformative technologies that power and connect account opening, digital banking, and branch solutions for banks and credit unions of all sizes on any core. Candescent solutions are trusted by institutions of all sizes and power the top three U.S. mobile banking apps. The company offers an extensive portfolio of industry-leading products and services with an ecosystem of out-of-the-box and integrated partner solutions. With an API-first architecture and developer tools, financial institutions can optimize and expand their capabilities by seamlessly integrating custom-built or third-party solutions. Candescent's connected in-person, remote, and digital experiences reinvent customer service across all channels. Financial institutions using Candescent's solutions have self-service configuration and marketing tools to control branding, targeted messaging, and user experience. Data-driven analytics and reporting tools provide valuable insights to drive growth and profitability. Clients receive expert, end-to-end support for conversions, implementations, custom development, and customer care.

Candescent is looking for a SW DevOps Engineer II - Cloud Platform with 4-6 years of experience to join its team in Bangalore (Ecospace). As a senior engineer in the organization's cloud engineering group, you will play a vital role in shaping the future of customer interactions with money. The Cloud Engineering team's primary focus in the digital banking domain is on enhancing the reliability and performance of the Digital First banking platform.

As a Site Reliability Engineer (SRE) on the Cloud Platform team, you will implement and enforce robust standards and practices to ensure the security, availability, and reliability of services. You will provide guidance, tooling, and best practices to development teams, collaborate with Product Development and Production Operations, and deploy and support Digital Banking SaaS offerings in the cloud. Responsibilities include providing technical leadership, insight, and guidance; building and supporting the Cloud Platform in GCP; maintaining CI/CD pipelines using GitOps principles; contributing to operational automation and self-service frameworks; driving continuous adoption and improvement of SRE methodology; managing projects; and collaborating with various teams to deliver a world-class cloud platform.

The required skills and experience include 4+ years of GCP cloud experience; expertise in Kubernetes, cloud networking, and GitOps processes; experience working with DevOps/SRE and Agile methodologies; experience with IaC technologies (especially Terraform); experience with cloud migrations; and a degree in Computer Science or a related field. Desired skill sets include Docker, Kubernetes, Google Cloud Platform, cloud migrations, IaC (Terraform), Python scripting, CI/CD (GitHub Actions), version control (Git), cloud networking, GitOps (ArgoCD), Nginx, and experience with Prometheus, Dynatrace, or other monitoring and logging tools.

Candidates must have high initiative, be clear communicators, and pass the screening criteria applicable to the job. Candescent only accepts resumes from agencies on its preferred supplier list and is not responsible for unsolicited resumes forwarded to its applicant tracking system or employees.
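As an illustration of the Terraform and operational-automation aspects of this role, here is a minimal sketch of a scheduled drift check. The working directory path is a hypothetical placeholder, and the exit-code handling follows Terraform's documented -detailed-exitcode semantics (0 = no changes, 1 = error, 2 = changes present).

    import subprocess
    import sys

    def detect_drift(workdir: str) -> int:
        # Run a non-interactive plan; with -detailed-exitcode the return code encodes the result.
        plan = subprocess.run(
            ["terraform", "plan", "-detailed-exitcode", "-input=false", "-no-color"],
            cwd=workdir,
            capture_output=True,
            text=True,
        )
        if plan.returncode == 2:
            print(f"Drift detected in {workdir}:\n{plan.stdout}")
        elif plan.returncode == 1:
            print(f"terraform plan failed in {workdir}:\n{plan.stderr}")
        else:
            print(f"No drift in {workdir}")
        return plan.returncode

    if __name__ == "__main__":
        sys.exit(detect_drift(sys.argv[1] if len(sys.argv) > 1 else "./infra/gcp"))  # assumed path

Run on a schedule (for example from a GitHub Actions cron workflow), a check like this surfaces manual changes to GCP resources so they can be reconciled back into the GitOps repository.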
Posted 1 week ago