5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Lead Engineer, DevOps at Toyota Connected India, you will have the opportunity to work in a collaborative and fast-paced environment focused on creating infotainment solutions on embedded and cloud platforms. You will be part of a team that values continual improvement, innovation, and delivering exceptional value to customers. Your role will involve being hands-on with cloud platforms like AWS and Google Cloud Platform, utilizing containerization and Kubernetes for container orchestration, and working with infrastructure automation and configuration management tools such as Terraform, CloudFormation, and Ansible. You will be expected to have a strong proficiency in scripting languages like Python, Bash, or Go, experience with monitoring and logging solutions including Prometheus, Grafana, ELK Stack, or Datadog, and knowledge of networking concepts, security best practices, and infrastructure monitoring. Additionally, your responsibilities will include working with CI/CD tools such as Jenkins, GitLab CI, CircleCI, or similar. At Toyota Connected, you will enjoy top-of-the-line compensation, autonomy in managing your time and workload, yearly gym membership reimbursement, free catered lunches, and a flexible dress code policy. You will have the opportunity to contribute to the development of products that enhance the safety and convenience of millions of customers. Moreover, you will be working in a new cool office space and enjoying other awesome benefits. Toyota Connected's core values are EPIC - Empathetic, Passionate, Innovative, and Collaborative. You will be encouraged to make decisions empathetically, strive to build something great, experiment with innovative ideas, and work collaboratively with your teammates to achieve success. Join us at Toyota Connected to be part of a team that is reimagining mobility for today and the future!,
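To make the scripting expectation concrete, here is a minimal, illustrative Python/boto3 sketch of the kind of cloud-hygiene automation a role like this often involves; the region, tag key, and policy are assumptions for the example, not details from the posting.

```python
# Illustrative only: flag EC2 instances that are missing a required "Owner" tag.
# Region and tag key are assumptions, not taken from the posting.
import boto3

REQUIRED_TAG = "Owner"

def untagged_instances(region: str = "ap-south-1") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    missing = []
    # describe_instances is paginated; iterate over all pages.
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    missing.append(instance["InstanceId"])
    return missing

if __name__ == "__main__":
    print("Instances missing tag:", untagged_instances())
```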
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
ahmedabad, gujarat
On-site
As a Team Lead in DevOps with 6+ years of experience, you will be responsible for managing, mentoring, and developing a team of DevOps engineers. Your role will involve overseeing the deployment and maintenance of applications such as Odoo (Python/PostgreSQL), Magento (PHP/MySQL), and Node.js (JavaScript/TypeScript). You will design and manage CI/CD pipelines using tools like Jenkins, GitHub Actions, and GitLab CI. Additionally, you will handle environment-specific configurations for staging, production, and QA. Your responsibilities will include containerizing legacy and modern applications using Docker and deploying them via Kubernetes (EKS/AKS/GKE) or Docker Swarm. You will implement and maintain Infrastructure as Code using tools like Terraform, Ansible, or CloudFormation. Monitoring application health and infrastructure using tools such as Prometheus, Grafana, ELK, Datadog, and ensuring systems are secure, resilient, and compliant with industry standards will also be part of your role. Optimizing cloud costs and infrastructure performance, collaborating with development, QA, and IT support teams, and troubleshooting performance, deployment, or scaling issues across tech stacks are essential tasks. To excel in this role, you must have at least 6 years of experience in DevOps/Cloud/System Engineering roles, with hands-on experience. You should have a minimum of 2 years of experience managing or leading DevOps teams. Proficiency in supporting and deploying Odoo on Ubuntu/Linux with PostgreSQL, Magento with Apache/Nginx, PHP-FPM, MySQL/MariaDB, and Node.js with PM2/Nginx or containerized setups is required. Experience with AWS, Azure, or GCP infrastructure in production, strong scripting skills (Bash, Python, PHP CLI, or Node CLI), and a deep understanding of Linux system administration and networking fundamentals are essential. In addition, you should have experience with Git, SSH, reverse proxies (Nginx), and load balancers. Good communication skills and exposure to managing clients are crucial. Preferred certifications that are highly valued include AWS Certified DevOps Engineer Professional, Azure DevOps Engineer Expert, and Google Cloud Professional DevOps Engineer. Additionally, experience with Magento Cloud DevOps or Odoo Deployment is considered a bonus. Nice-to-have skills include experience with multi-region failover, HA clusters, RPO/RTO-based design, familiarity with MySQL/PostgreSQL optimization, and knowledge of Redis, RabbitMQ, or Celery. Previous experience with GitOps, ArgoCD, Helm, or Ansible Tower, as well as knowledge of VAPT 2.0, WCAG compliance, and infrastructure security best practices, are also advantageous for this role.,
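As an illustration of the CI/CD work described above, here is a minimal post-deployment smoke check of the kind a pipeline stage might run after a release; the endpoints are hypothetical.

```python
# Illustrative post-deploy smoke check of the kind a CI/CD pipeline might run
# after releasing a web application. Endpoints below are hypothetical.
import sys
import requests

CHECKS = {
    "storefront": "https://shop.example.com/health",
    "erp": "https://erp.example.com/web/health",
    "api": "https://api.example.com/status",
}

def run_checks(timeout: float = 5.0) -> bool:
    ok = True
    for name, url in CHECKS.items():
        try:
            resp = requests.get(url, timeout=timeout)
            healthy = resp.status_code == 200
        except requests.RequestException:
            healthy = False
        print(f"{name}: {'OK' if healthy else 'FAILED'} ({url})")
        ok = ok and healthy
    return ok

if __name__ == "__main__":
    sys.exit(0 if run_checks() else 1)  # non-zero exit fails the pipeline stage
```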
Posted 2 weeks ago
2.0 - 8.0 years
0 Lacs
ahmedabad, gujarat
On-site
As a DevOps Manager, you will be responsible for leading our DevOps efforts across a suite of modern and legacy applications, including Odoo (Python), Magento (PHP), Node.js, and other web-based platforms. Your main duties will include managing, mentoring, and growing a team of DevOps engineers, overseeing the deployment and maintenance of various applications, designing and managing CI/CD pipelines, handling environment-specific configurations, containerizing applications, implementing and maintaining Infrastructure as Code, monitoring application health and infrastructure, ensuring system security and compliance, optimizing cloud cost and performance, collaborating with cross-functional teams, and troubleshooting technical issues. To be successful in this role, you should have at least 8 years of experience in DevOps/Cloud/System Engineering roles with real hands-on experience, including 2+ years of experience managing or leading DevOps teams. You should have experience supporting and deploying applications like Odoo, Magento, and Node.js, along with strong scripting skills in Bash, Python, PHP CLI, or Node CLI. Additionally, you should have a deep understanding of Linux system administration, networking fundamentals, AWS/Azure/GCP infrastructure, Git, SSH, reverse proxies, and load balancers. Good communication skills and client management exposure are also essential for this position. Preferred certifications for this role include AWS Certified DevOps Engineer, Azure DevOps Engineer Expert, and Google Cloud Professional DevOps Engineer. Bonus skills that would be beneficial for this position include experience with multi-region failover, HA clusters, MySQL/PostgreSQL optimization, GitOps, VAPT 2.0, WCAG compliance, and infrastructure security best practices. In summary, as a DevOps Manager, you will play a crucial role in leading our DevOps efforts and ensuring the smooth deployment, maintenance, and optimization of various applications while collaborating with different teams and implementing best practices in infrastructure management and security.,
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
delhi
On-site
As a DevOps Engineer specializing in App Infrastructure & Scaling, you will be a valuable addition to our technology team. Your primary responsibility will be to design, implement, and maintain scalable and secure cloud infrastructure that supports our mobile and web applications. Your role is crucial in ensuring system reliability, performance, and cost efficiency across different environments. You will work with Google Cloud Platform (GCP) to design, configure, and manage cloud infrastructure. Your tasks will include implementing horizontal scaling, load balancers, auto-scaling groups, and performance monitoring systems. Additionally, you will be developing and maintaining CI/CD pipelines using tools like GitHub Actions, Jenkins, or GitLab CI. Real-time monitoring, crash alerting, logging systems, and health dashboards will be set up by you using industry-leading tools. Managing and optimizing Redis, job queues, caching layers, and backend request loads will also be part of your responsibilities. You will automate data backups, enforce secure access protocols, and implement disaster recovery systems. Collaborating with Flutter and PHP (Laravel) teams to address performance bottlenecks and reduce system load is crucial. Infrastructure security audits will be conducted by you to recommend best practices for preventing downtime and security breaches. Monitoring and optimizing cloud usage and billing to ensure a cost-effective and scalable architecture will also fall under your purview. You should have at least 3-5 years of hands-on experience in a DevOps or Cloud Infrastructure role, preferably with GCP. Proficiency with Docker, Kubernetes, NGINX, and load balancing strategies is essential. Experience with CI/CD pipelines and tools like GitHub Actions, Jenkins, or GitLab CI is required. Familiarity with monitoring tools such as Grafana, Prometheus, NewRelic, or Datadog is expected. A deep understanding of API architecture, including rate limiting, error handling, and fallback mechanisms is necessary. Experience working with PHP/Laravel backends, Firebase, and modern mobile app infrastructure is beneficial. Working knowledge of Redis, Socket.IO, and message queuing systems like RabbitMQ or Kafka will be advantageous. Preferred qualifications include a Google Cloud Professional certification or equivalent. Experience in optimizing systems for high-concurrency, low-latency environments and familiarity with Infrastructure as Code (IaC) tools like Terraform or Ansible are considered a plus.,
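To illustrate the caching-layer responsibilities mentioned above, here is a minimal read-through Redis cache sketch in Python; the host, TTL, and backend call are assumptions for the example.

```python
# Minimal read-through cache sketch using Redis, illustrating the caching-layer
# work mentioned above. Host, TTL, and the fetch function are assumptions.
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_from_backend(user_id: str) -> dict:
    # Placeholder for a slow database or API call.
    return {"id": user_id, "plan": "premium"}

def get_user(user_id: str, ttl_seconds: int = 300) -> dict:
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit
    value = fetch_from_backend(user_id)    # cache miss: hit the backend
    r.setex(key, ttl_seconds, json.dumps(value))
    return value
```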
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
ahmedabad, gujarat
On-site
As a DevOps Engineer, you will define and implement DevOps strategies that are closely aligned with the business goals. Your primary responsibility will be to lead cross-functional teams in order to enhance collaboration among development, QA, and operations teams. This involves designing, implementing, and managing Continuous Integration/Continuous Deployment (CI/CD) pipelines to automate build, test, and deployment processes, thereby accelerating release cycles. Furthermore, you will be tasked with implementing and managing Infrastructure as Code using tools such as Terraform, CloudFormation, Ansible, among others. Your expertise will be crucial in managing cloud platforms like AWS, Azure, or Google Cloud. It will also be your responsibility to monitor and mitigate security risks in CI/CD pipelines and infrastructure, as well as setting up observability tools like Prometheus, Grafana, Splunk, Datadog, etc. In addition, you will play a key role in implementing proactive alerting and incident response processes. This will involve leading incident response efforts and conducting root cause analysis (RCA) when necessary. Documenting DevOps processes, best practices, and system architectures will also be part of your routine tasks. As a DevOps Engineer, you will continuously evaluate and implement new DevOps tools and technologies to enhance efficiency and productivity. Moreover, you will be expected to foster a culture of learning and knowledge sharing within the organization, promoting collaborative growth and development among team members.,
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
As a Senior Linux & Cloud Administrator at SAP, you will play a key role in supporting the seamless 24/7 operations of our cloud platform across Azure, AWS, Google Cloud, and SAP data centers. Your primary responsibilities will involve ensuring the smooth operation of business-critical SAP systems in the cloud, leveraging technologies such as Prometheus, Grafana, Kubernetes, Ansible, ArgoCD, AWS, GitHub Actions, and more. Your tasks will include network troubleshooting, architecture design, cluster setup and configuration, and the development of automation solutions to deliver top-notch cloud services for SAP applications to enterprise customers globally. You will be part of the ECS Delivery XDU team, responsible for the operation of SAP Enterprise Cloud Services (ECS) Delivery, a managed services provider offering SAP applications through the HANA Enterprise Cloud. At SAP, we are committed to building a workplace culture that values collaboration, embraces diversity, and is focused on creating a better world. Our company ethos revolves around a shared passion for helping the world run better, with a strong emphasis on learning and development, recognizing individual contributions, and offering a range of benefit options for our employees. SAP is a global leader in enterprise application software, with a mission to help customers worldwide work more efficiently and leverage business insights effectively. With a cloud-based approach and a dedication to innovation, SAP serves millions of users across various industries, driving solutions for ERP, database, analytics, intelligent technologies, and experience management. As a purpose-driven and future-focused organization, SAP fosters a highly collaborative team environment and prioritizes personal development, ensuring that every challenge is met with the right solution. At SAP, we believe in the power of inclusion and diversity, supporting the well-being of our employees and offering flexible working models to enable everyone to perform at their best. We recognize the unique strengths that each individual brings to our company, investing in our workforce to unleash their full potential and create a more equitable world. As an equal opportunity workplace, SAP is committed to providing accessibility accommodations to applicants with disabilities and promoting a culture of equality and empowerment. If you are interested in joining our team at SAP and require accommodation during the application process, please reach out to our Recruiting Operations Team at Careers@sap.com. We are dedicated to fostering an environment where all talents are valued, and every individual has the opportunity to thrive.,
Posted 2 weeks ago
3.0 - 6.0 years
11 - 15 Lacs
Bengaluru
Work from Office
Job Summary: We are looking for a proactive and detail-oriented L1 DataOps Monitoring Engineer to support our data pipeline operations. This role involves monitoring, identifying issues, raising alerts, and ensuring timely communication and escalation to minimize data downtime and improve reliability. Roles and Responsibilities Key Responsibilities: Monitor data pipelines, jobs, and workflows using tools like Airflow, Control-M, or custom monitoring dashboards. Acknowledge and investigate alerts from monitoring tools (Datadog, Prometheus, Grafana, etc.). Perform first-level triage for job failures, delays, and anomalies. Log incidents and escalate to L2/L3 teams as per SOP. Maintain shift handover logs and daily operational reports. Perform routine system checks and health monitoring of data environments. Follow predefined runbooks to troubleshoot known issues. Coordinate with application, infrastructure, and support teams for timely resolution. Participate in shift rotations including nights/weekends/public holidays. Skills and Qualifications: Bachelor's degree in Computer Science, IT, or related field (or equivalent experience). 0–2 years of experience in IT support, monitoring, or NOC environments. Basic understanding of data pipelines, ETL/ELT processes. Familiarity with monitoring tools (Datadog, Grafana, CloudWatch, etc.). Exposure to job schedulers (Airflow, Control-M, Autosys) is a plus. Good verbal and written communication skills. Ability to remain calm and effective under pressure. Willingness to work in a 24x7 rotational shift model. Good to Have (Optional): Knowledge of cloud platforms (AWS/GCP/Azure) Basic SQL or scripting knowledge (Shell/Python) ITIL awareness or ticketing systems experience (e.g., ServiceNow, JIRA)
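As a sketch of first-level triage against Airflow, the snippet below polls the stable REST API for failed DAG runs before escalation; the base URL, credentials, DAG id, and exact API parameters are assumptions, and the escalation itself would follow the team's SOP.

```python
# Sketch of a first-level triage helper: poll Airflow's stable REST API for
# failed runs of a DAG and print them for escalation. The base URL, DAG id,
# and credentials are assumptions; adapt to your environment and runbook.
import requests

AIRFLOW = "http://airflow.example.com"          # hypothetical
AUTH = ("monitor_user", "change-me")            # hypothetical basic-auth user

def failed_runs(dag_id: str, limit: int = 10) -> list[dict]:
    url = f"{AIRFLOW}/api/v1/dags/{dag_id}/dagRuns"
    resp = requests.get(url, params={"state": "failed", "limit": limit},
                        auth=AUTH, timeout=10)
    resp.raise_for_status()
    return resp.json().get("dag_runs", [])

if __name__ == "__main__":
    for run in failed_runs("daily_sales_pipeline"):
        # Log the incident reference before escalating to L2/L3 per the SOP.
        print(run["dag_run_id"], run.get("execution_date"), run["state"])
```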
Posted 2 weeks ago
3.0 - 7.0 years
7 - 10 Lacs
Hyderabad
Work from Office
Job Title: SDE-2/3 | Location: Hyderabad | Experience range: 0-1 Yr | Notice Period: Immediate joiner. What we offer: Our mission is simple: building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many. Key Responsibilities: Design, develop, and maintain backend services and APIs using Java and Spring Boot. Collaborate with product managers, architects, and QA teams to deliver robust banking solutions. Build microservices for transaction processing, customer onboarding, and risk management. Integrate with internal and third-party APIs for payments, KYC, and credit scoring. Ensure code quality through unit testing, code reviews, and adherence to secure coding practices. Participate in Agile ceremonies and contribute to continuous integration and deployment (CI/CD) pipelines. Required Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Proficiency in Java, Spring Boot, and RESTful API development. Solid understanding of data structures, algorithms, and object-oriented programming. Experience with relational databases (e.g., PostgreSQL, MySQL) and version control (Git). Familiarity with cloud platforms (AWS, GCP, or Azure) and containerization (Docker, Kubernetes). Preferred Skills: Exposure to financial systems, banking APIs, or fintech platforms. Knowledge of security standards (e.g., OAuth2, JWT, PCI DSS). Experience with messaging systems (Kafka, RabbitMQ) and monitoring tools (Grafana, Prometheus). Strong problem-solving skills and ability to work in a fast-paced environment. Education background: Bachelor's degree in Computer Science, Information Technology, or a related field of study. Good to have certifications: Java Certified Developer, AWS Developer or Solutions Architect.
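For context on the API-security skills listed above, here is a small, hedged example of the OAuth2 client-credentials pattern for calling a secured API; all URLs, scopes, and credentials are hypothetical and not specific to any Kotak system.

```python
# Illustrative OAuth2 client-credentials flow of the kind used to call secured
# banking APIs. All URLs, client ids, and scopes below are hypothetical.
import requests

TOKEN_URL = "https://auth.example-bank.com/oauth2/token"
API_URL = "https://api.example-bank.com/v1/payments"

def get_token(client_id: str, client_secret: str) -> str:
    resp = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials", "scope": "payments.read"},
        auth=(client_id, client_secret),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def list_payments(token: str) -> list[dict]:
    resp = requests.get(API_URL, headers={"Authorization": f"Bearer {token}"},
                        timeout=10)
    resp.raise_for_status()
    return resp.json().get("payments", [])
```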
Posted 2 weeks ago
3.0 - 7.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Job Title: Associate III - Software Engineering | Location: Hyderabad | Experience range: 0-1 Yr | Notice Period: Immediate joiner. What we offer: Our mission is simple: building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many. Key Responsibilities: Design, develop, and maintain backend services and APIs using Java and Spring Boot. Collaborate with product managers, architects, and QA teams to deliver robust banking solutions. Build microservices for transaction processing, customer onboarding, and risk management. Integrate with internal and third-party APIs for payments, KYC, and credit scoring. Ensure code quality through unit testing, code reviews, and adherence to secure coding practices. Participate in Agile ceremonies and contribute to continuous integration and deployment (CI/CD) pipelines. Required Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Proficiency in Java, Spring Boot, and RESTful API development. Solid understanding of data structures, algorithms, and object-oriented programming. Experience with relational databases (e.g., PostgreSQL, MySQL) and version control (Git). Familiarity with cloud platforms (AWS, GCP, or Azure) and containerization (Docker, Kubernetes). Preferred Skills: Exposure to financial systems, banking APIs, or fintech platforms. Knowledge of security standards (e.g., OAuth2, JWT, PCI DSS). Experience with messaging systems (Kafka, RabbitMQ) and monitoring tools (Grafana, Prometheus). Strong problem-solving skills and ability to work in a fast-paced environment. Education background: Bachelor's degree in Computer Science, Information Technology, or a related field of study. Good to have certifications: Java Certified Developer, AWS Developer or Solutions Architect.
Posted 2 weeks ago
21.0 - 31.0 years
35 - 42 Lacs
Bengaluru
Work from Office
What we’re looking for As a member of the Infrastructure team at Survey Monkey, you will have a direct impact in designing, engineering and maintaining our Cloud, Messaging and Observability Platform. Solutioning with best practices, deployment processes, architecture, and support the ongoing operation of our multi-tenant AWS environments. This role presents a prime opportunity for building world-class infrastructure, solving complex problems at scale, learning new technologies and offering mentorship to other engineers. What you’ll be working on Architect, build, and operate AWS environments at scale with well-established industry best practices Automating infrastructure provisioning, DevOps, and/or continuous integration/delivery Support and maintain AWS services, such as EKS, Heroku Write libraries and APIs that provide a simple, unified interface to other developers when they use our monitoring, logging, and event-processing systems Support and partner with other teams on improving our observability systems to monitor site stability and performance Work closely with developers in supporting new features and services. Work in a highly collaborative team environment. Participate in on-call rotation We’d love to hear from people with 8+ years of relevant professional experience with cloud platforms such as AWS, Heroku. Extensive experience with Terraform, Docker, Kubernetes, scripting (Bash/Python/Yaml), and helm. Experience with Splunk, Open Telemetry, CloudWatch, or tools like New Relic, Datadog, or Grafana/Prometheus, ELK (Elasticsearch/Logstash/Kibana). Experience with metrics and logging libraries and aggregators, data analysis and visualization tools – Specifically Splunk and Otel. Experience instrumenting PHP, Python, Java and Node.js applications to send metrics, traces, and logs to third-party Observability tooling. Experience with GitOps and tools like ArgoCD/fluxcd. Interest in Instrumentation and Optimization of Kubernetes Clusters. Ability to listen and partner to understand requirements, troubleshoot problems, or promote the adoption of platforms. Experience with GitHub/GitHub Actions/Jenkins/Gitlab in either a software engineering or DevOps environment. Familiarity with databases and caching technologies, including PostgreSQL, MongoDB, Elasticsearch, Memcached, Redis, Kafka and Debezium. Preferably experience with secrets management, for example Hashicorp Vault. Preferably experience in an agile environment and JIRA. SurveyMonkey believes in-person collaboration is valuable for building relationships, fostering community, and enhancing our speed and execution in problem-solving and decision-making. As such, this opportunity is hybrid and requires you to work from the SurveyMonkey office in Bengaluru 3 days per week. #LI - Hybrid
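To illustrate the instrumentation work described above, here is a minimal OpenTelemetry tracing sketch in Python; the console exporter stands in for a real collector, and the service and span names are assumptions.

```python
# Minimal OpenTelemetry tracing sketch, illustrating the kind of application
# instrumentation this role supports. The console exporter stands in for a
# real backend (e.g. an OTLP collector); service and span names are assumptions.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

def handle_request(order_id: str) -> None:
    # Each request becomes a trace; nested units of work become child spans.
    with tracer.start_as_current_span("handle_request") as span:
        span.set_attribute("order.id", order_id)
        with tracer.start_as_current_span("load_order"):
            pass  # database lookup would go here
        with tracer.start_as_current_span("charge_payment"):
            pass  # downstream API call would go here

handle_request("ord-123")
```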
Posted 2 weeks ago
2.0 - 6.0 years
12 - 15 Lacs
Navi Mumbai
Work from Office
Key Responsibilities: Manage and administer complex multi-cloud environments (AWS, GCP, Azure). Monitor infrastructure performance and troubleshoot complex issues at the L3 level. Design and implement automation scripts using tools like Terraform, Ansible, CloudFormation, or Bicep. Optimize cloud cost, performance, and security in line with best practices. Support deployment and maintenance of production, development, and staging environments. Collaborate with DevOps, Networking, and Security teams to ensure seamless operations. Implement and monitor backup, disaster recovery, and high-availability strategies. Conduct root cause analysis (RCA) for critical incidents and service disruptions. Stay updated with evolving cloud technologies and recommend improvements. Participate in 24x7 on-call rotation for critical incident support. Technical Skills: Cloud Platforms: Deep expertise in AWS, GCP, and Azure services (EC2, VPC, IAM, S3, AKS, GKE, App Services, etc.) Infrastructure as Code (IaC): Hands-on with Terraform, Ansible, ARM Templates, CloudFormation. Scripting: Strong in PowerShell, Python, or Bash scripting. CI/CD Pipelines: Experience with Jenkins, GitHub Actions, Azure DevOps, or similar tools. Monitoring Tools: Proficient in tools like CloudWatch, Azure Monitor, Stackdriver, Prometheus, Grafana. Security & Governance: Knowledge of IAM, RBAC, security groups, policies, and compliance. Containers & Orchestration: Familiarity with Kubernetes, Docker, AKS, GKE, EKS. Certifications (Preferred): AWS Certified SysOps Administrator / Solutions Architect Associate or Professional Microsoft Certified: Azure Administrator Associate or Architect Expert Google Associate Cloud Engineer or Professional Cloud Architect
Posted 2 weeks ago
0.0 - 5.0 years
15 - 20 Lacs
Chennai
Work from Office
Job Title: Tech Lead/Cloud Architect | Experience: 0-5 Years | Location: Remote. A NASDAQ-listed company that has effectively maintained its position as the front-runner in the food and beverage sector is looking to onboard a Tech Lead to guide and manage the development team on various projects. The Tech Lead will be responsible for overseeing the technical direction of the projects, ensuring the development of high-quality, scalable, and maintainable code. The talent will be interacting with other talents as well as an internal cross-functional team. Required Skills: Cloud architecture using microservices design; data modelling/design; API design and API contracts; React; Java; Azure; ADO; RESTful API, GraphQL, SQL/NoSQL DB; experience with ADF, Databricks; CI/CD, SonarQube, Snyk, Prometheus, Grafana. Responsibilities: Collaborate with Product and Data teams. Ensure a clear understanding of requirements. Architect and design microservices-based enterprise web applications. Build data-intensive, UI-rich, microservices-based enterprise applications that are scalable, performant, and secure, using cloud best practices in Azure. Offer Details: Full-time dedication (40 hours/week); REQUIRED: 3-hour overlap with CST (Central Standard Time). Interview Process: 2-step interview, consisting of an initial screening and a technical interview.
Posted 2 weeks ago
3.0 - 8.0 years
20 - 35 Lacs
Gurugram, Delhi / NCR, Mumbai (All Areas)
Hybrid
Job location: Mumbai/Gurugram (Hybrid) About the role: Sun King is looking for a self-driven Infrastructure engineer, who is comfortable working in a fast-paced startup environment and balancing the needs of multiple development teams and systems. You will work on improving our current IAC, observability stack, and incident response processes. You will work with the data science, analytics, and engineering teams to build optimized CI/CD pipelines, scalable AWS infrastructure, and Kubernetes deployments. What you would be expected to do: Work with engineering, automation, and data teams to work on various infrastructure requirements. Designing modular and efficient GitOps CI/CD pipelines, agnostic to the underlying platform. Managing AWS services for multiple teams. Managing custom data store deployments like sharded MongoDB clusters, Elasticsearch clusters, and upcoming services. Deployment and management of Kubernetes resources. Deployment and management of custom metrics exporters, trace data, custom application metrics, and designing dashboards, querying metrics from multiple resources, as an end-to-end observability stack solution. Set up incident response services and design effective processes. Deployment and management of critical platform services like OPA and Keycloak for IAM. Advocate best practices for high availability and scalability when designing AWS infrastructure, observability dashboards, implementing IAC, deploying to Kubernetes, and designing GitOps CI/CD pipelines. You might be a strong candidate if you have/are: Hands-on experience with Docker or any other container runtime environment and Linux with the ability to perform basic administrative tasks. Experience working with web servers (nginx, apache) and cloud providers (preferably AWS). Hands-on scripting and automation experience (Python, Bash), experience debugging and troubleshooting Linux environments and cloud-native deployments. Experience building CI/CD pipelines, with familiarity with monitoring & alerting systems (Grafana, Prometheus, and exporters). Knowledge of web architecture, distributed systems, and single points of failure. Familiarity with cloud-native deployments and concepts like high availability, scalability, and bottleneck. Good networking fundamentals SSH, DNS, TCP/IP, HTTP, SSL, load balancing, reverse proxies, and firewalls. Good to have: Experience with backend development and setting up databases and performance tuning using parameter groups. Working experience in Kubernetes cluster administration and Kubernetes deployments. Experience working alongside SecOps engineers. Basic knowledge of Envoy, service mesh (Istio), and SRE concepts like distributed tracing. Setup and usage of open telemetry, central logging, and monitoring systems. Apply here: https://sunking.pinpointhq.com/postings/b63a7111-1b98-48de-8528-4bb4bb77436f
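As a sketch of the custom metrics exporters mentioned above, here is a small Prometheus exporter in Python; the metric name, port, and queue-depth source are assumptions for the example.

```python
# Sketch of a custom Prometheus exporter of the kind mentioned above: expose a
# gauge that Prometheus can scrape. The queue-depth source is a stand-in.
import random
import time
from prometheus_client import Gauge, start_http_server

QUEUE_DEPTH = Gauge("worker_queue_depth", "Number of jobs waiting in the queue")

def read_queue_depth() -> int:
    # Placeholder: in practice this would query Redis, RabbitMQ, a database, etc.
    return random.randint(0, 50)

if __name__ == "__main__":
    start_http_server(9100)          # metrics served at http://localhost:9100/metrics
    while True:
        QUEUE_DEPTH.set(read_queue_depth())
        time.sleep(15)               # roughly match the Prometheus scrape interval
```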
Posted 2 weeks ago
3.0 - 8.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer. Project Role Description: Design, build, and configure applications to meet business process and application requirements. Must-have skills: Spring Boot. Good-to-have skills: NA. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years full-time education. Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are built to the highest standards of quality and performance. You will also participate in discussions to refine project goals and contribute to the overall success of the team. Roles & Responsibilities: Expected to perform independently and become an SME. Required active participation/contribution in team discussions. Contribute to providing solutions to work-related problems. Assist in the documentation of application specifications and design. Engage in code reviews to ensure adherence to best practices and standards. Professional & Technical Skills: DS & Algo, Java 17/Java EE, Spring Boot, CI/CD; web services using RESTful APIs, Spring framework, caching techniques, PostgreSQL/SQL, JUnit for testing, and containerization with Kubernetes/Docker; Airflow, GCP, Spark, Kafka; hands-on experience in building alerting/monitoring/logging for microservices using frameworks like OpenObserve/Splunk, Grafana, Prometheus. Additional Information: The candidate should have a minimum of 3 years of experience in Spring Boot. This position is based at our Bengaluru office. A 15 years full-time education is required. Qualification: 15 years full-time education.
Posted 2 weeks ago
12.0 - 16.0 years
37 - 42 Lacs
Bengaluru
Work from Office
Job Objective: As AVP/VP Architect- Lead the design and development of scalable, reliable, and high-performance architecture for Zwayam. Job Description: In this role you will: Hands-on Coding & Code Review: Actively participate in coding and code reviews, ensuring adherence to best practices, coding standards, and performance optimization. High-Level and Low-Level Design: Create comprehensive architectural documentation that guides the development team and ensures the scalability and security of the system. Security Best Practices: Implement security strategies, including data encryption, access control, and threat detection, ensuring the platform adheres to the highest security standards. Compliance Management: Oversee compliance with regulatory requirements such as GDPR, including data protection, retention policies, and audit readiness. Disaster Recovery & Business Continuity: Design and implement disaster recovery strategies to ensure the reliability and continuity of the system in case of failures or outages. Scalability & Performance Optimization: Ensure the system architecture can scale seamlessly and optimize performance as business needs grow. Monitoring & Alerting: Set up real-time monitoring and alerting systems to ensure proactive identification and resolution of performance bottlenecks, security threats, and system failures. Cross-Platform Deployment: Architect flexible, cloud-agnostic solutions and manage deployments on Azure and AWS platforms. Containerization & Orchestration: Use Kubernetes and Docker Swarm for container management and orchestration to achieve a high degree of automation and reliability in deployments. Data Management: Manage database architecture using MySQL, MongoDB and ElasticSearch to ensure efficient storage, retrieval, and management of data. Message Queuing Systems: Design and manage asynchronous communication using Kafka and Redis for event-driven architecture. Collaboration & Leadership: Work closely with cross-functional teams including developers, product managers, and other stakeholders to deliver high-quality solutions on time. Mentoring & Team Leadership: Mentor, guide, and lead the engineering team, fostering technical growth and maintaining adherence to architectural and coding standards. Required Skills: Experience: 12+ years of experience in software development and architecture, with at least 3 years in a leadership/architect role. Technical Expertise: Proficient in Java and related frameworks like Spring-boot Experience with databases like MySQL, MongoDB, ElasticSearch, and message queuing systems like Kafka, Redis. Proficiency with containerization (Docker, Docker Swarm) and orchestration (Kubernetes). Solid experience with cloud platforms (Azure, AWS, GCP). Experience with monitoring tools (e.g., Prometheus, Grafana, ELK stack) and alerting systems for real-time issue detection and resolution. Compliance & Security: Hands-on experience in implementing security best practices. Familiarity with compliance frameworks such as GDPR and DPDP Architecture & Design: Proven experience in high-level and low-level architectural design. Problem-Solving: Strong analytical and problem-solving skills, with the ability to handle complex and ambiguous situations. Leadership: Proven ability to lead teams, influence stakeholders, and drive change. 
Communication: Excellent verbal and written communication skills. Our Ideal Candidate: The ideal candidate should possess a deep understanding of the latest architectural patterns, cloud-native design, and security practices. They should be adept at translating business requirements into scalable and efficient technical solutions. A proactive, hands-on approach to problem-solving and a passion for innovation are essential. Strong leadership and mentoring skills are crucial to drive a high-performance team and foster technical excellence.
Posted 2 weeks ago
7.0 - 12.0 years
9 - 14 Lacs
Pune
Work from Office
Job Summary: Synechron is seeking a skilled and experienced Lead Java Developer to oversee the development, deployment, and support of complex enterprise applications. This role involves leading technical initiatives, ensuring best practices in software engineering, and collaborating across teams to deliver cloud-enabled, scalable, and efficient solutions. The successful candidate will contribute to our strategic technology objectives while fostering innovation, best coding practices, and continuous improvement in a dynamic environment. Software Requirements. Required: Proficiency in Java (latest stable versions), with extensive experience in building enterprise-scale applications; familiarity with Kettle jobs (Pentaho Data Integration); operating systems: Unix/Linux; scripting languages: shell scripting, Perl, Python; job scheduling tools: Control-M, Autosys; database technologies: SQL Server, Oracle, or MongoDB; monitoring tools such as Grafana, Prometheus, or Splunk; container orchestration: Kubernetes and OpenShift; messaging middleware: Kafka, EMS, RabbitMQ; big data platforms: Apache Flink, Spark, Apache Beam, Hadoop, Gemfire, Ignite; continuous integration/delivery tools: Jenkins, TeamCity, SonarQube, Git. Preferred: Experience with cloud platforms (e.g., AWS); additional data processing frameworks or cloud deployment tools; knowledge of security best practices in enterprise environments. Overall Responsibilities: Lead the design, development, and deployment of scalable Java-based solutions aligned with business needs. Analyze existing system logic, troubleshoot issues, and implement improvements or fixes. Collaborate with business stakeholders and technical teams to gather requirements, propose solutions, and document functionalities. Define system architecture, including APIs, data flows, and system integration points. Develop and maintain comprehensive documentation, including technical specifications, deployment procedures, and API documentation. Support application deployment, configurations, and release management within CI/CD pipelines. Implement monitoring and alerting solutions using tools like Grafana, Prometheus, or Splunk for operational insights. Ensure application security and compliance with enterprise security standards. Mentor junior team members and promote development best practices across the team. Performance Outcomes: Robust, scalable, and maintainable applications; reduced system outages and improved performance metrics; clear, complete documentation supporting operational and development teams; effective team collaboration and technical leadership. Technical Skills (By Category). Programming Languages: Essential: Java. Preferred: Scripting languages (Shell, Perl, Python). Frameworks and Libraries: Essential: Java frameworks such as Spring Boot, Spring Cloud. Preferred: Microservices architecture, messaging, or big data libraries. Databases/Data Management: Essential: SQL Server, Oracle, MongoDB. Preferred: Data grid solutions like Gemfire or Ignite. Cloud Technologies: Preferred: Hands-on experience with AWS, Azure, or similar cloud platforms, especially for container deployment and orchestration. Containerization and Orchestration: Essential: Kubernetes, OpenShift. DevOps & CI/CD: Essential: Jenkins, TeamCity, SonarQube, Git. Monitoring & Security: Preferred: Familiarity with Grafana, Prometheus, Splunk; understanding of data security, encryption, and access control best practices. Experience Requirements: Minimum 7+ years of professional experience in Java application development. Proven
experience leading enterprise projects, especially involving distributed systems and big data technologies. Experience designing and deploying cloud-ready applications. Familiarity with SDLC processes, Agile methodologies, and DevOps practices. Experience with application troubleshooting, system integration, and performance tuning. Day-to-Day Activities: Lead project meetings, coordinate deliverables, and oversee technical planning. Develop, review, and optimize Java code, APIs, and microservices components. Collaborate with development, QA, and operations teams to ensure smooth deployment and operation of applications. Conduct system analysis, performance tuning, and troubleshooting of live issues. Document system architecture, deployment procedures, and operational workflows. Mentor junior developers, review code, and promote best engineering practices. Stay updated on emerging technologies, trends, and tools applicable to enterprise software development. Qualifications: Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field. Relevant certifications (e.g., Java certifications, cloud certifications) are advantageous. Extensive hands-on experience in Java, microservices, and enterprise application development. Exposure to big data, cloud deployment, and container orchestration preferred. Professional Competencies: Strong analytical and problem-solving skills for complex technical challenges. Leadership qualities, including mentoring and guiding team members. Effective communication skills for stakeholder engagement and documentation. Ability to work independently and collaboratively within Agile teams. Continuous improvement mindset, eager to adapt and incorporate new technologies. Good organizational and time management skills for handling multiple priorities. SYNECHRON'S DIVERSITY & INCLUSION STATEMENT: Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative "Same Difference" is committed to fostering an inclusive culture promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
Posted 2 weeks ago
4.0 - 8.0 years
15 - 25 Lacs
Bengaluru
Work from Office
Job Summary: We are looking for a skilled Apache Solr Engineer to design, implement, and maintain scalable and high-performance search solutions. The ideal candidate will have hands-on experience with Solr/SolrCloud, strong analytical skills, and the ability to work in cross-functional teams to deliver efficient search functionalities across enterprise or customer-facing applications. Experience: 4–8 years Roles and Responsibilities Key Responsibilities: Design, develop, and maintain enterprise-grade search solutions using Apache Solr and SolrCloud . Develop and optimize search indexes and schema based on use cases like product search, document search, or order/invoice search. Integrate Solr with backend systems, databases and APIs. Implement full-text search , faceted search , auto-suggestions , ranking , and relevancy tuning . Optimize search performance, indexing throughput, and query response time. Ensure data consistency and high availability using SolrCloud and Zookeeper (cluster coordination & configuration management). Monitor search system health and troubleshoot issues in production. Collaborate with product teams, data engineers, and DevOps teams for smooth delivery. Stay up to date with new features of Apache Lucene/Solr and recommend improvements. Required Skills & Qualifications: Strong experience in Apache Solr & SolrCloud Good understanding of Lucene , inverted index , analyzers , tokenizers , and search relevance tuning . Proficient in Java or Python for backend integration and development. Experience with RESTful APIs , data pipelines, and real-time indexing. Familiarity with Zookeeper , Docker , Kubernetes (for SolrCloud deployments). Knowledge of JSON , XML , and schema design in Solr. Experience with log analysis , performance tuning , and monitoring tools like Prometheus/Grafana is a plus. Exposure to e-commerce or document management search use cases is an advantage. Preferred Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or related field. Experience with Elasticsearch or other search technologies is a plus. Working knowledge of CI/CD pipelines and cloud platforms ( Azure).
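To ground the Solr integration work described above, here is an illustrative example of indexing and querying through Solr's standard HTTP API; the core name, fields, and host are assumptions for the example.

```python
# Illustrative Solr indexing and query calls against the standard HTTP API.
# The core name, fields, and host are assumptions for the example.
import requests

SOLR = "http://localhost:8983/solr/products"   # hypothetical core

def index_docs(docs: list[dict]) -> None:
    # JSON update handler; commit=true makes the docs searchable immediately.
    resp = requests.post(f"{SOLR}/update", params={"commit": "true"},
                         json=docs, timeout=10)
    resp.raise_for_status()

def search(text: str, rows: int = 10) -> list[dict]:
    params = {"q": f"name:{text}", "rows": rows, "wt": "json"}
    resp = requests.get(f"{SOLR}/select", params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()["response"]["docs"]

index_docs([{"id": "sku-1", "name": "running shoes"},
            {"id": "sku-2", "name": "trail shoes"}])
print(search("shoes"))
```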
Posted 2 weeks ago
1.0 - 4.0 years
4 - 7 Lacs
Pune
Work from Office
Job Summary: We are seeking a proactive and detail-oriented Site Reliability Engineer (SRE) focused on Monitoring to join our observability team. The candidate will be responsible for ensuring the reliability, availability, and performance of our systems through robust monitoring, alerting, and incident response practices. Key Responsibilities: Monitor Application, IT infrastructure environment Drive the end-to-end incident response and resolution Design, implement, and maintain monitoring and alerting systems for infrastructure and applications. Continuously improve observability by integrating logs, metrics, and traces into a unified monitoring platform. Collaborate with development and operations teams to define and track SLIs, SLOs, and SLAs. Analyze system performance and reliability data to identify trends and potential issues. Participate in incident response, root cause analysis, and post-mortem documentation. Automate repetitive monitoring tasks and improve alert accuracy to reduce noise. Required Skills & Qualifications: 2+ years of experience in application/system monitoring, SRE, or DevOps roles. Proficiency with monitoring tools such as Prometheus, Grafana, ELK, APM, Nagios, Zabbix, Datadog, or similar. Strong scripting skills (Python, Bash, or similar) for automation. Experience with cloud platforms (AWS, Azure) and container orchestration (Kubernetes). Solid understanding of Linux/Unix systems and networking fundamentals. Excellent problem-solving and communication skills.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
navi mumbai, maharashtra
On-site
You are a Senior Backend Developer with a strong expertise in GoLang, MongoDB, and Kubernetes, sought by Grexa AI Pvt Ltd, located in Vashi, Navi Mumbai. Your primary responsibility will be to develop and optimize scalable backend services that support AI Marketing Agents. Your key responsibilities include developing and maintaining high-performance backend systems using GoLang, optimizing MongoDB queries and schema design for scalability, deploying, managing, and scaling services on Kubernetes in GCP/AWS, implementing OAuth2-based authentication and API security, integrating RESTful APIs with AI-powered services, setting up monitoring tools such as Prometheus and Kibana, and establishing CI/CD pipelines for efficient deployments. Additionally, you will be accountable for ensuring high availability, fault tolerance, and security of backend services. To qualify for this role, you should have at least 3 years of backend development experience and possess a strong expertise in GoLang, MongoDB, and Kubernetes. Experience with Docker, GCP/AWS, and microservices architecture is also required. Proficiency in OAuth2, API authentication, security best practices, familiarity with monitoring tools like Prometheus and Kibana, as well as experience in CI/CD pipelines and DevOps practices are essential. Strong problem-solving skills and the ability to thrive in a fast-paced startup environment are vital for success in this role. Additionally, having AI Agent Building Experience, familiarity with gRPC and Protocol Buffers (ProtoBufs) for efficient service communication, and experience with PostgreSQL or other relational databases are considered nice-to-have qualifications.,
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
You have an exciting opportunity to join our team as a Server Management Administrator with 3-6 years of experience in Linux server administration. As a Server Management Administrator, you will be responsible for server installation, maintenance, decommissioning, and troubleshooting in both physical and virtual environments. Your expertise in Linux OS operations, hypervisor operations (XEN and VMware), LVM, DNS, LDAP, and IPTables will be crucial in ensuring the smooth operation of our server landscape. In addition to your Linux proficiency, you should have a basic understanding of at least one public cloud platform such as AWS, Azure, or GCP. Knowledge of automation tools like Chef, scripting languages like bash and Perl, and version control systems like GitHub will be advantageous. Familiarity with monitoring tools such as Prometheus and ELK stack will also be beneficial. Your role will involve providing 3rd level technical support in alignment with customer SLAs, following ITIL processes for daily compute operations, and handling server-related tasks such as storage operations, patch management, and performance analysis. You will also be expected to collaborate with 2nd level technical support on complex issues and participate in an on-call rotation if required. As part of the NTT DATA Business Solutions team, you will have the opportunity to work with cutting-edge technologies and contribute to the transformation of SAP solutions into business value. If you are passionate about server management, automation, and cloud technologies, we invite you to join us on this exciting journey. For further details or inquiries regarding this position, please feel free to reach out to our Recruiter: Recruiter Name: Santhosh Koyada Recruiter Email ID: santhosh.koyada@bs.nttdata.com Join us at NTT DATA Business Solutions and be part of a dynamic and innovative IT company that is at the forefront of SAP solutions and services.,
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
You will lead the development of high-performance backend services using Java and Spring Boot, designing and building reliable and scalable REST APIs and microservices. Taking ownership of features and system components throughout the software lifecycle will be your responsibility. You will also design and implement CI/CD workflows using tools like Jenkins or GitHub Actions, contributing to architectural decisions, code reviews, and system optimizations. Your expertise in Java and advanced experience with the Spring Boot framework will be essential, along with proven experience in building and scaling REST APIs and microservices. Hands-on experience with CI/CD automation and DevOps tools is required, as well as working knowledge of distributed systems, cloud platforms, and Kafka. A strong understanding of system design, performance optimization, and best coding practices is crucial for this role. Proficiency in Docker and Kubernetes for containerized deployments, exposure to NoSQL databases such as MongoDB and Cassandra, and experience with configuration server management and dynamic config updates are nice-to-have skills. Familiarity with monitoring and logging tools like Prometheus, ELK Stack, or others, along with awareness of cloud security standards, observability, and incident management will be beneficial. This is a full-time position with benefits including Provident Fund. The work schedule is during the day shift, and the job requires at least 5 years of experience in Java Developer, Docker and Kubernetes, NoSQL databases MongoDB, Cassandra, Kafka, Spring Boot framework, Jenkins, GitHub, REST APIs, system design, cloud architectures and microservices, monitoring and logging tools, and awareness of cloud security. Work location is in person.,
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a Site Reliability Engineer III at JPMorgan Chase within the Corporate Technology, you will be at the center of a rapidly growing field in technology. Your role involves applying your skillsets to drive innovation and modernize the world's most complex and mission-critical systems. You will be responsible for solving complex business problems with simple solutions through code and cloud infrastructure. Your tasks will include configuring, maintaining, monitoring, and optimizing applications and their associated infrastructure. You will play a vital role in decomposing and iteratively improving existing solutions, contributing significantly to your team by sharing your knowledge of end-to-end operations, availability, reliability, and scalability of applications or platforms. Responsibilities: - Guide and assist others in building appropriate level designs and gaining consensus from peers - Collaborate with software engineers and teams to design and implement deployment approaches using automated continuous integration and continuous delivery pipelines - Design, develop, test, and implement availability, reliability, scalability, and solutions in applications - Implement infrastructure, configuration, and network as code for applications and platforms - Collaborate with technical experts, key stakeholders, and team members to resolve complex problems - Understand service level indicators and utilize service level objectives to proactively resolve issues - Support the adoption of site reliability engineering best practices within the team Required Qualifications: - Formal training or certification on software engineering concepts with 3+ years of applied experience - Proficiency in site reliability culture and principles, and familiarity with implementing site reliability within an application or platform - Proficiency in at least one programming language such as Python, Java/Spring Boot, and .Net - Knowledge of software applications and technical processes within a given technical discipline (e.g., Cloud, artificial intelligence, Android, etc.) - Experience in observability tools like Grafana, Dynatrace, Prometheus, Datadog, Splunk, etc. - Experience with continuous integration and continuous delivery tools like Jenkins, GitLab, or Terraform - Familiarity with container and container orchestration such as ECS, Kubernetes, and Docker - Familiarity with troubleshooting common networking technologies and issues - Ability to contribute to large and collaborative teams with limited supervision - Proactive recognition of roadblocks and interest in learning innovative technologies - Ability to identify new technologies and relevant solutions to meet design constraints Preferred Qualifications: - Familiarity with popular IDEs for Software Development - General knowledge of the financial services industry (preferred),
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Database Administrator at our company based in Chennai, you will be responsible for managing, monitoring, optimizing, and supporting PostgreSQL, MySQL, SQL Server, and Redis (ElastiCache) in both production and development environments. Your role will include providing 16/7 on-call support for database incidents to ensure high availability and reliability. In this position, you will also be expected to have knowledge of AWS services such as RDS, Aurora, DynamoDB, ElastiCache, and Redshift. You will play a key role in deploying and managing these AWS database services. Additionally, you will be involved in automating database provisioning and management using Infrastructure as Code (IaC) tools like Terraform. Performance optimization will be a crucial aspect of your role, where you will be required to tune queries, indexing, partitioning, and troubleshoot slow queries and replication issues. You will also be responsible for implementing tools like Flyway, Liquibase, or Alembic for zero-downtime database migrations. As part of the job, you will need to learn and provide support for a variety of SQL and NoSQL databases beyond PostgreSQL and MySQL. Automation will be a key focus, where you will utilize tools like Prometheus and CloudWatch for database health checks and automation. Collaboration and documentation are essential in this role. You will collaborate with developers, DevOps, and SRE teams to support database needs and document best practices to ensure efficient operations. If you are passionate about database management, automation, and optimizing performance, and if you have the required skills in PostgreSQL, MySQL, and AWS services, we would like to hear from you. Thank you for considering this opportunity.,
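As an example of the routine health checks and automation mentioned above, here is a hedged PostgreSQL long-running-query check that could feed CloudWatch or Prometheus alerts; connection details and the threshold are assumptions.

```python
# Sketch of a routine PostgreSQL health check of the kind that could feed
# CloudWatch/Prometheus alerts. Connection details and thresholds are assumptions.
import psycopg2

DSN = "host=db.example.com dbname=appdb user=monitor password=change-me"  # hypothetical

def long_running_queries(threshold_seconds: int = 60) -> list[tuple]:
    with psycopg2.connect(DSN) as conn:
        with conn.cursor() as cur:
            cur.execute(
                """
                SELECT pid, now() - query_start AS duration, left(query, 80)
                FROM pg_stat_activity
                WHERE state = 'active'
                  AND now() - query_start > make_interval(secs => %s)
                ORDER BY duration DESC
                """,
                (threshold_seconds,),
            )
            return cur.fetchall()

for pid, duration, query in long_running_queries():
    print(f"pid={pid} running for {duration}: {query}")
```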
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
punjab
On-site
As a Senior Software Developer specializing in React, AWS, and DevOps, your role in Perth will involve utilizing your hands-on experience with React/Angular applications. You will be responsible for the setup, maintenance, and enhancement of cloud infrastructure for web applications, leveraging your expertise in AWS Cloud services. Your responsibilities will include understanding and implementing core AWS services, ensuring the application's security and scalability by adhering to best practices. You will be expected to establish and manage the CI/CD pipeline using the AWS CI/CD stack, while also demonstrating proficiency in BDD/TDD methodologies. In addition, your role will require expertise in serverless approaches using AWS Lambda and the ability to write infrastructure as code using tools like CloudFormation. Knowledge of Docker and Kubernetes will be advantageous, along with a strong understanding of security best practices such as IAM Roles and KMS. Furthermore, experience with monitoring solutions like CloudWatch, Prometheus, and the ELK stack will be beneficial in ensuring the performance and reliability of the applications you work on. Additionally, having a good understanding of DevOps practices will be essential for success in this role. If you have any further inquiries or require clarification on any aspect of the job, please do not hesitate to reach out.,
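To illustrate the serverless approach with AWS Lambda noted above, here is a minimal handler sketch; the request and response shapes assume an API Gateway proxy integration, and the field names are illustrative.

```python
# Minimal AWS Lambda handler sketch illustrating the serverless approach noted
# above: a small API-Gateway-style function. Field names are assumptions.
import json

def handler(event, context):
    # API Gateway proxy integration passes the HTTP body as a JSON string.
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```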
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
As a Data Engineer at our organization, you will have the opportunity to work on building smart, automated testing solutions. We are seeking individuals who are passionate about data engineering and eager to contribute to our growing team. Ideally, you should hold a Bachelor's or Master's degree in Computer Science, IT, or equivalent field, with a minimum of 4 to 8 years of experience in building and deploying complex data pipelines and data solutions. For junior profiles, a similar educational background is preferred. Your responsibilities will include deploying data pipelines using technologies like Databricks, as well as demonstrating hands-on experience with Java and Databricks. Additionally, experience with visualization software such as Splunk (or alternatives like Grafana, Prometheus, PowerBI, Tableau) is desired. Proficiency in SQL and Java, along with hands-on experience in data modeling, is essential for this role. Familiarity with Pyspark or Spark for managing distributed data is also expected. Knowledge of Splunk (SPL), data schemas (e.g., JSON/XML/Avro), and deploying services as containers (e.g., Docker, Kubernetes) will be beneficial. Experience working with cloud services, particularly Azure, is advantageous. Familiarity with streaming and/or batch storage technologies like Kafka and data quality management and monitoring will be considered a plus. Strong communication skills in English are essential for effective collaboration within our team. If you are excited about this opportunity and possess the required qualifications, we encourage you to connect with us by sending your updated CV to nivetha.s@eminds.ai. Join us and become a part of our exciting journey!,
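As a sketch of the data-pipeline work described above, here is a small PySpark batch aggregation; the paths, columns, and storage format are assumptions for the example.

```python
# Sketch of a small PySpark batch job of the kind described above: read raw
# events, aggregate, and write out. Paths and columns are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_event_rollup").getOrCreate()

events = spark.read.json("/mnt/raw/events/2024-01-01/")   # hypothetical path

daily_counts = (
    events
    .filter(F.col("event_type").isNotNull())
    .groupBy("event_type")
    .agg(F.count("*").alias("events"),
         F.countDistinct("user_id").alias("users"))
)

daily_counts.write.mode("overwrite").parquet("/mnt/curated/event_rollup/2024-01-01/")
```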
Posted 2 weeks ago