8.0 - 10.0 years
20 - 22 Lacs
pune
Work from Office
Core Java, Collections, Java 8 (Lambdas, Streams), Spring Framework (Core/Boot/Integration), Apache Flink, Apache Kafka, ELK stack (Elasticsearch, Logstash, Kibana), BPMN/CMMN, Angular/JavaScript/React/Redux, CI/CD, Git, Agile SDLC
Posted 1 week ago
4.0 - 5.0 years
8 - 12 Lacs
gurugram
Work from Office
Position Overview: We are seeking an SRE to join our high-impact platform engineering team. You will maintain SLAs for real-time services deployed across hybrid clouds and Kubernetes clusters, contributing to automation, observability, and availability goals.
Roles and Responsibilities:
- Monitor application and infrastructure metrics; build dashboards and alerts (Prometheus, Grafana, ELK).
- Automate health checks, incident remediation, and reliability guardrails.
- Manage on-call rotations, conduct root cause analysis, and implement postmortem action plans.
- Define and track SLOs, SLIs, and error budgets.
- Use chaos engineering and resilience testing to ensure fault tolerance.
Must-Have Skills:
- 4-5 years of experience managing production-grade Kubernetes clusters and cloud-native platforms.
- Proficiency in Linux system internals, containers, and networking.
- Scripting/automation expertise in Python, Go, or Shell.
- Familiarity with incident management, runbooks, and observability standards.
- Exposure to service discovery, DNS routing, and load balancing is a bonus.
Qualification: BE/BTech/MCA/ME/MTech/MS in Computer Science or a related technical field, or equivalent practical experience.
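As a rough illustration of the SLO and error-budget tracking listed above, here is a minimal Python sketch; the 99.9% objective, rolling window, and request counts are placeholder assumptions rather than figures from the posting.

```python
# Minimal error-budget sketch for a request-based SLO over a rolling window.
SLO_TARGET = 0.999   # assumed 99.9% availability objective (e.g., over 30 days)

def error_budget_report(total_requests: int, failed_requests: int) -> dict:
    """Return how much of the error budget has been consumed so far."""
    allowed_failures = (1 - SLO_TARGET) * total_requests        # budget expressed in requests
    consumed = failed_requests / allowed_failures if allowed_failures else 0.0
    availability = 1 - failed_requests / total_requests if total_requests else 1.0
    return {
        "availability": round(availability, 5),
        "error_budget_consumed": round(consumed, 3),            # 1.0 == budget exhausted
        "budget_remaining_requests": int(allowed_failures - failed_requests),
    }

# Example: 12M requests so far this window, 9,500 of them failed (made-up numbers).
print(error_budget_report(12_000_000, 9_500))
```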
Posted 1 week ago
8.0 - 13.0 years
18 - 22 Lacs
pune
Hybrid
Java Full Stack Software Engineer with 8+ years of experience, strong in Core Java and Collections. Preferred experience with Java 8 features such as Lambda Expressions and Streams. Extensive experience with the Spring Framework (Core/Boot/Integration) and good knowledge of design patterns applicable to data streaming. Experience with Apache Flink, Apache Kafka, and the ELK stack (Elasticsearch, Logstash, Kibana) is highly desirable, as is experience with Flowable or similar BPMN/CMMN tooling. Knowledge of front-end technologies such as Angular, JavaScript, React, or Redux is also applicable. Familiarity with CI/CD (TeamCity/Jenkins), Git (GitHub/GitLab), Docker/containerization technologies, and Microsoft Azure is expected, along with a proven track record in an agile SDLC in a large-scale enterprise environment. Knowledge of post-trade processing in large financial institutions is an added bonus!
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
chennai, tamil nadu
On-site
We are seeking a skilled ELK (Elasticsearch, Logstash, Kibana) Administrator to build and manage the ELK stack from the ground up. The ideal candidate will have strong proficiency in ELK cloud deployments, Kubernetes at the cluster level, Infrastructure as Code (IaC), and automation tools. Your primary responsibilities will include deploying, configuring, and managing Elasticsearch clusters, indexing strategies, and Kibana spaces. You will also implement SAML-based authentication and role-based access controls, monitor ELK cluster performance, automate ELK stack deployments using Terraform, Ansible, and Helm charts, develop CI/CD pipelines, and leverage Elasticsearch APIs for advanced automation. You will collaborate closely with DevOps and security teams to integrate ELK with enterprise applications and uphold best practices for log management, security, and infrastructure as code. The successful candidate should have expertise in ELK stack administration, configuration, and querying; ELK cloud setup and management; Kubernetes at the cluster level; Infrastructure as Code with Terraform; Git; CI/CD tools such as GitHub Actions; Linux administration; VS Code; Ansible; Helm charts; SAML mapping; performance optimization; Elasticsearch queries and APIs; infrastructure automation; and SRE practices. UST is a global digital transformation solutions provider that partners with leading companies worldwide to drive impactful transformations. With a workforce of over 30,000 employees across 30 countries, UST embeds innovation and agility into client organizations to touch billions of lives.
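The posting mentions leveraging the Elasticsearch APIs for automation. Below is a minimal, hedged Python sketch of that idea: it creates an ILM policy and a matching index template through the standard REST endpoints. The cluster URL, credentials, index pattern, and retention values are illustrative assumptions.

```python
import requests

ES = "https://elasticsearch.example.internal:9200"   # assumed cluster endpoint
AUTH = ("elastic", "changeme")                        # assumed credentials

# ILM policy: roll over daily or at 50 GB, delete after 30 days (example values only).
ilm_policy = {
    "policy": {
        "phases": {
            "hot": {"actions": {"rollover": {"max_age": "1d", "max_primary_shard_size": "50gb"}}},
            "delete": {"min_age": "30d", "actions": {"delete": {}}},
        }
    }
}
# verify=False only because this sketch assumes a self-signed cert; enable verification in practice.
requests.put(f"{ES}/_ilm/policy/app-logs-policy", json=ilm_policy, auth=AUTH, verify=False).raise_for_status()

# Composable index template that applies the policy to all app-logs-* indices.
template = {
    "index_patterns": ["app-logs-*"],
    "template": {
        "settings": {
            "number_of_shards": 1,
            "index.lifecycle.name": "app-logs-policy",
            "index.lifecycle.rollover_alias": "app-logs",
        }
    },
}
requests.put(f"{ES}/_index_template/app-logs-template", json=template, auth=AUTH, verify=False).raise_for_status()
print("ILM policy and index template created")
```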
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
hyderabad, telangana
On-site
As a Java Enterprise Technical Architect, you will be responsible for designing and deploying scalable, high-performance microservices architecture. Your expertise in cloud computing, DevOps, security, and database optimization will be vital in ensuring the efficiency and security of enterprise applications. You will play a key role in fixing VAPT vulnerabilities, recommending deployment architectures, and implementing clustering and scalability solutions. Your hands-on experience in coding with Java, Spring Boot, and microservices will be crucial in leading architecture decisions while ensuring best practices in software development. Your responsibilities will include designing secure and efficient deployment architectures, optimizing enterprise applications for security, and providing recommendations for cloud-native architectures on AWS, Azure, or GCP. You will also be responsible for fixing VAPT issues, implementing end-to-end security measures, and ensuring database security with encryption and access control mechanisms. Performance optimization, scalability, DevOps, and cloud enablement will be key focus areas requiring your expertise. In addition to technical leadership and hands-on development, you will review and improve code quality, scalability, and security practices across development teams. You will mentor developers, conduct training sessions, and define architecture patterns and coding standards to ensure high-quality, scalable, and secure applications. Collaboration with stakeholders, evaluation of technologies, and staying current with emerging trends will be essential in driving innovation and ensuring the security, performance, and reliability of the system architecture. To qualify for this role, you should have 10+ years of hands-on experience in Java full-stack, Spring Boot, J2EE, and microservices, along with 5+ years of expertise in designing enterprise-grade deployment architectures. A strong security background, network design skills, and deep knowledge of application servers are required, as is strong experience in database performance tuning, DevOps, cloud platforms, and containerization technologies. Effective communication, problem-solving, and analytical skills are essential for working closely with technical and non-technical stakeholders. A Bachelor's or Master's degree in Computer Science, Information Technology, or a related field is required.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
You should have a strong understanding of the ELK (Elasticsearch, Logstash, and Kibana) stack, an end-to-end platform that provides real-time insights from a variety of data sources. Your responsibilities will include working with Elasticsearch APIs and shards, writing parsers in Logstash, and creating dashboards in Kibana. You must have deep knowledge of log analytics and expert-level experience in Elasticsearch, Logstash, and Kibana; specifically, you should be proficient in working with the Kibana APIs, building Logstash parsers, and developing Kibana visualizations and dashboards based on client requirements. You should also be comfortable writing scripts on Linux. Key Skills: Kibana, Logstash, Elasticsearch, APIs, ELK Stack, Linux.
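To make the log-analytics side of this role concrete, here is a small, hedged Python sketch that runs an Elasticsearch aggregation (ERROR counts per service over the last hour) against the standard _search endpoint; the endpoint, index pattern, and field names are assumptions, not details from the posting.

```python
import requests

ES = "http://localhost:9200"   # assumed cluster endpoint
INDEX = "app-logs-*"           # assumed index pattern

# Count ERROR-level log lines per service over the last hour.
query = {
    "size": 0,
    "query": {
        "bool": {
            "filter": [
                {"term": {"level": "ERROR"}},
                {"range": {"@timestamp": {"gte": "now-1h"}}},
            ]
        }
    },
    "aggs": {"by_service": {"terms": {"field": "service.keyword", "size": 10}}},
}

resp = requests.post(f"{ES}/{INDEX}/_search", json=query)
resp.raise_for_status()
for bucket in resp.json()["aggregations"]["by_service"]["buckets"]:
    print(f'{bucket["key"]}: {bucket["doc_count"]} errors in the last hour')
```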
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
noida, uttar pradesh
On-site
You will be a valuable member of the global Services team, providing support to worldwide clients. Your responsibilities will include building and maintaining database environments for data factory projects, assisting in the development and enhancement of tools and utilities for the Data Factory, offering on-demand support on database-related issues to clients, and providing development support for ongoing projects to Consulting Services clients. To be successful in this role, you should have a Bachelor's degree in Computer Science (a background in math is a plus) and 4-7 years of experience in software development, consulting, or support. Proficiency in at least one of Java, Python, or Scala, a strong understanding of relational, graph, or time series databases, experience with the ELK stack (including Kibana querying and Elasticsearch index management), and familiarity with log aggregation and analysis are essential. Experience with regex, Kibana visualization strategies, Elasticsearch index configuration, Ansible, Linux diagnostics and tuning, and databases such as Neo4j, Oracle, MS SQL Server, and PostgreSQL will be beneficial. Additionally, experience with public cloud technologies (AWS, Azure, GCP), performance tuning, and troubleshooting skills are required. DTA India offers a competitive salary based on experience, along with bonuses, health and term insurance, and a generous leave package. This is a full-time position primarily scheduled from 10 AM to 7 PM IST, with occasional weekend work required. The work location is in Noida (Delhi-NCR region). To apply for this position, please send your CV, salary expectations, and a cover letter to jobs@dtainc.in.
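The posting pairs regex skills with Elasticsearch index management. As a hedged illustration (the index name, field layout, and endpoint are assumptions), the Python sketch below parses log lines with a regular expression and bulk-indexes the structured documents.

```python
import json
import re
import requests

ES = "http://localhost:9200"   # assumed endpoint
LOG_LINE = re.compile(r"^(?P<ts>\S+) (?P<level>[A-Z]+) (?P<service>\S+) (?P<msg>.*)$")

raw_lines = [
    "2024-05-01T10:00:01Z ERROR payments Timeout calling card gateway",
    "2024-05-01T10:00:02Z INFO orders Order 42 accepted",
]

# Build an _bulk payload: one action line plus one document line per record (NDJSON).
bulk_body = ""
for line in raw_lines:
    m = LOG_LINE.match(line)
    if not m:
        continue
    doc = {"@timestamp": m["ts"], "level": m["level"], "service": m["service"], "message": m["msg"]}
    bulk_body += json.dumps({"index": {"_index": "parsed-logs"}}) + "\n" + json.dumps(doc) + "\n"

resp = requests.post(
    f"{ES}/_bulk",
    data=bulk_body,
    headers={"Content-Type": "application/x-ndjson"},
)
resp.raise_for_status()
print("errors in bulk response:", resp.json()["errors"])
```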
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
As a Solution Engineer-Backend Developer at Aptiv, you will play a crucial role in shaping the future of mobility by working towards a world with zero vehicle accidents, zero vehicle emissions, and widespread wireless vehicle connectivity. In collaboration with our passionate team of engineers and developers, you will contribute to creating advanced safety systems, high-performance electrification solutions, and data connectivity solutions that enable sustainable mobility and reduce accidents caused by human error. Your responsibilities will include working closely with Architects and Developers to maintain, extend, and create solutions used globally. You will be an integral part of Development and Operations, contributing to building and enhancing solutions for our groundbreaking platform hosted in multiple clouds. Serving as a central contact for internal stakeholders, you will resolve issues by utilizing or expanding our tooling landscape and deploying innovative solutions to a game-changing CI platform. Additionally, you will participate in technical discussions with our Agile team, ensure compliance with data privacy requirements, and adhere to coding standards and guidelines throughout the software development process. Debugging, troubleshooting, and fixing bugs will also be part of your day-to-day tasks. To excel in this role, you should hold a Bachelor's (BE) or Master's (MS) degree in a technical discipline and have at least 5 years of experience writing software using scripting languages, preferably TypeScript and Python, on Linux. Experience with DevOps tools such as Git, CI/CD, Docker, and Ansible, as well as working with cross-functional global teams in a dynamic environment, is essential. Proficiency in English, excellent written and verbal communication skills, and good interpersonal skills are required. Additional experience in technologies such as the MERN stack, RESTful APIs, databases (SQL, NoSQL), React, and the ELK Stack, along with programming languages like JavaScript/TypeScript, Rust, and C++, will be advantageous. Experience with asynchronous programming and cloud infrastructure such as Azure, AWS, and Google Cloud is preferred. At Aptiv, you will have the opportunity to grow in an inclusive work environment that values individual development and impact. Safety being a core value, Aptiv aims for a world with zero fatalities, zero injuries, and zero accidents, contributing to a safer future for all. We provide resources and support for your family, physical health, and mental well-being through a competitive health insurance package. Your benefits at Aptiv include hybrid and flexible working hours, higher education opportunities, life and accident insurance, Sodexo cards for food and beverages, a Well-Being Program with regular workshops and networking events, EAP Employee Assistance, access to fitness clubs, and a creche facility for working parents. Join Aptiv today and be part of the team that is changing tomorrow. Apply now and contribute to a safer and more connected world!
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Microservices Support Lead for the Java/Spring Boot/Azure Spring Apps platform, you will play a crucial role in analyzing and resolving customer-reported issues related to the solution. Your responsibilities will include overseeing the operational stability and performance of microservices-based applications hosted in the Azure Spring Apps platform. You will lead a team of support engineers, collaborate with development teams, and ensure the reliability and scalability of the microservices architecture.
Key Responsibilities:
- Triage issues daily; collaborate with the onsite architect and the customer to unblock support engineers.
- Lead and mentor a team of microservices support engineers, providing guidance on issue resolution.
- Collaborate with development teams to understand the original design and architecture.
- Monitor Azure dashboards for the health and performance of microservices applications.
- Use database fundamentals to query, index, and measure performance with respect to data access.
- Apply familiarity with Spring JMS, JDBC, Azure Data Factory pipelines, Azure App Configuration, Redis Cache, and Azure App Insights.
- Work with pgAdmin in PostgreSQL and API testing tools like Postman for issue troubleshooting.
- Assist the DevOps team with microservices deployment.
- Contribute to creating standard operating procedures, runbooks, release notes, and training materials.
- Partner with stakeholders to provide technical solutions that permanently resolve issues.
- Coordinate with cross-functional teams during incident response and problem resolution.
- Adhere to service level objectives (SLOs) when responding to and fixing customer-raised issues.
- Stay updated on industry trends and emerging technologies related to microservices and cloud-native applications.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 8+ years of experience in a technical leadership role supporting microservices-based applications.
- Deep understanding of microservices architecture, Azure Spring Apps, containerization, and cloud platforms.
- Experience with monitoring tools, DevOps practices, Java, Spring Boot microservices, and PostgreSQL.
- Strong troubleshooting and problem-solving skills.
- Proficiency in CI/CD pipelines and automated deployment.
- Excellent communication and interpersonal skills.
- Strong understanding of Azure and Spring Boot.
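For the day-to-day triage and monitoring described above, a support engineer might poll each microservice's Spring Boot Actuator health endpoint. The Python sketch below is a hedged illustration; the service URLs are placeholders, and it assumes the standard /actuator/health endpoint is exposed.

```python
import requests

# Hypothetical service endpoints; real URLs would come from service discovery or configuration.
SERVICES = {
    "orders": "https://orders.example.internal/actuator/health",
    "payments": "https://payments.example.internal/actuator/health",
}

def check_health() -> dict:
    """Return each service's reported status (UP/DOWN/unreachable)."""
    results = {}
    for name, url in SERVICES.items():
        try:
            resp = requests.get(url, timeout=5)
            results[name] = resp.json().get("status", "UNKNOWN")
        except requests.RequestException as exc:
            results[name] = f"UNREACHABLE ({exc.__class__.__name__})"
    return results

if __name__ == "__main__":
    for service, status in check_health().items():
        print(f"{service}: {status}")
```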
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
Job Description: As a Kubernetes Subject Matter Expert (SME) at Saika Technologies Pvt Ltd., you will be responsible for designing, deploying, and managing Kubernetes clusters. This full-time hybrid role based in Hyderabad offers some work-from-home flexibility. Your primary tasks will include troubleshooting issues, ensuring high availability and scalability, implementing security best practices, and collaborating with development teams to support containerized applications. The ideal candidate for this position should hold Certified Kubernetes Administrator (CKA) or Certified Kubernetes Security Specialist (CKS) certifications. You should have expertise in Kubernetes, container orchestration, and microservices architecture. Additionally, experience with Docker and containerization, strong knowledge of cloud platforms like AWS and Azure, proficiency in Linux administration and scripting, and an understanding of CI/CD pipelines and tools such as Jenkins or GitLab CI are required. Familiarity with monitoring and logging tools such as Prometheus, Grafana, and the ELK stack is preferred. You should possess excellent problem-solving skills, attention to detail, and the ability to work both independently and in a team environment. A Bachelor's degree in Computer Science, Engineering, or a related field would be beneficial for this role.
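As a small, hedged illustration of the cluster-management work described above, the Python sketch below uses the official Kubernetes client to list pods that are not Running or Succeeded, the kind of quick health sweep an SME might automate. It assumes a reachable cluster and a local kubeconfig.

```python
from kubernetes import client, config

# Load credentials from ~/.kube/config (use config.load_incluster_config() when running inside a pod).
config.load_kube_config()
v1 = client.CoreV1Api()

# Flag pods that are not Running or Succeeded across all namespaces.
unhealthy = []
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        unhealthy.append((pod.metadata.namespace, pod.metadata.name, phase))

for ns, name, phase in unhealthy:
    print(f"{ns}/{name}: {phase}")
print(f"{len(unhealthy)} pod(s) need attention")
```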
Posted 1 week ago
5.0 - 10.0 years
7 - 15 Lacs
thiruvananthapuram
Hybrid
We are seeking a skilled and motivated DevOps Engineer to join our dynamic team. The ideal candidate will be responsible for implementing and managing the deployment, automation, and maintenance of our applications and infrastructure. This role requires a strong understanding of both development and operations processes, with the ability to collaborate effectively across teams to ensure seamless integration and delivery of software solutions.
Experience, Technical and Functional Skills:
- Design, implement, and maintain CI/CD pipelines to automate the software delivery process.
- Collaborate with development teams to ensure applications are designed for scalability and reliability.
- Manage cloud infrastructure and services, ensuring optimal performance and cost efficiency.
- Monitor system performance and troubleshoot issues to ensure high availability and reliability.
- Implement security best practices and ensure compliance with industry standards.
- Develop and maintain scripts and tools to automate operational tasks.
- Proven experience as a DevOps Engineer or in a similar role.
- Strong knowledge of cloud platforms such as AWS, Azure, or Google Cloud.
- Experience with containerization technologies like Docker and orchestration tools like Kubernetes.
- Proficiency in scripting languages such as Python, Bash, or PowerShell.
- Familiarity with configuration management tools like Ansible, Puppet, or Chef.
- Experience with monitoring and logging tools such as Prometheus, Grafana, ELK Stack, etc.
- Understanding of networking concepts and protocols.
- Knowledge of Agile and DevOps methodologies and practices.
- Certification in cloud platforms or DevOps tools is a plus.
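As one concrete example of the monitoring and automation work listed above, the hedged Python sketch below queries the Prometheus HTTP API for a service's 5xx error ratio and could serve as a simple post-deployment gate; the Prometheus URL, metric names, job label, and threshold are assumptions.

```python
import requests

PROM = "http://prometheus.example.internal:9090"   # assumed Prometheus server
# Ratio of 5xx responses over the last 5 minutes (metric and label names are illustrative).
QUERY = (
    'sum(rate(http_requests_total{job="checkout",status=~"5.."}[5m]))'
    ' / sum(rate(http_requests_total{job="checkout"}[5m]))'
)
THRESHOLD = 0.01   # fail the gate above 1% errors (assumed policy)

resp = requests.get(f"{PROM}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()
result = resp.json()["data"]["result"]

error_ratio = float(result[0]["value"][1]) if result else 0.0
print(f"5xx error ratio: {error_ratio:.4f}")
if error_ratio > THRESHOLD:
    raise SystemExit("Deployment gate failed: error ratio above threshold")
print("Deployment gate passed")
```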
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
kochi, kerala
On-site
As a DevOps Specialist at our company based in Kochi, you will play a crucial role in ensuring the smooth functioning of our systems. With over 5 years of experience, you will be responsible for proactive performance and security measures that keep our systems running as smoothly as a well-oiled CI/CD pipeline. Your daily tasks will include conducting system health checks, monitoring logs, tracking incidents, and providing support for CI/CD pipelines (Jenkins, GitLab), containers (Docker, Kubernetes), and cloud platforms (AWS/Azure). You will use tools like Prometheus, Grafana, and AppDynamics to ensure visibility, and collaborate on change/problem management and process enhancement. Additionally, you will be expected to maintain clear documentation and communicate effectively with stakeholders. To excel in this role, you should have hands-on experience with Linux, the ELK Stack, ServiceNow, and security tools such as SonarQube, Black Duck, and OWASP. Strong communication skills, excellent troubleshooting abilities, and familiarity with Life Sciences, SCRUM, or ITIL methodologies are highly valued. If you are a proactive problem solver who thrives in a fast-paced environment and has the required technical skills and knowledge, we encourage you to apply. Connect with us by sending your resume to m.neethu@ssconsult.in or reach out for more details. Join us in this exciting opportunity to contribute to our dynamic team and make a meaningful impact in the world of DevOps. #DevOpsJobs #KochiJobs #HiringNow #CI/CD #Docker #Kubernetes #AWS #Azure #TechCareers #DevOpsSpecialist
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
haryana
On-site
You are a highly skilled and motivated Site Reliability Engineer (SRE) with a background in software development, particularly in Python or Go, and extensive experience with DevOps tools and practices. Your primary responsibility will be to ensure the reliability, scalability, and performance of systems and services. Your key responsibilities include designing, implementing, and maintaining CI/CD pipelines to automate deployment processes; monitoring and managing infrastructure for high availability and performance; developing scripts and tools for automation and monitoring; troubleshooting and resolving issues related to infrastructure and applications; implementing security measures to protect data and infrastructure; conducting performance tuning and capacity planning; and participating in on-call rotations for critical systems support. Qualifications for this role include proven experience as a Site Reliability Engineer or similar, proficiency in Python or Go, strong experience with DevOps tools such as Jenkins, Git, Docker, Kubernetes, and Terraform, a solid understanding of CI/CD principles and practices, familiarity with infrastructure as code (IaC) and configuration management tools, knowledge of security best practices and tools, and experience with monitoring and logging tools like Prometheus, Grafana, and the ELK stack. Excellent problem-solving skills, attention to detail, and strong communication and collaboration skills are essential. Tools you will work with include: programming and scripting: Python, Go; version control: Gerrit, GitHub, GitLab; CI/CD: Spinnaker, Jenkins, GitLab CI; Infrastructure as Code: Terraform, AWS CloudFormation, Ansible, Puppet, Chef; cloud platforms: AWS, Azure, Google Cloud; containers and orchestration: Docker, Kubernetes; networking and security: VPN, firewalls, load balancing, TLS/SSL, IAM, VPC; databases: SQL, NoSQL (MongoDB, Redis), OpenSearch; automation and configuration management: Ansible, Puppet, Chef, SaltStack; DevOps practices: Agile/Scrum, Kanban, Lean; collaboration and communication: Jira, Confluence, Slack; security: Black Duck.
Posted 1 week ago
12.0 - 16.0 years
0 Lacs
haryana
On-site
We are seeking a Head of Architecture to define and drive the end-to-end architecture strategy for REA India, a leading real estate technology platform in India. As the Head of Architecture, your primary focus will be on scalability, security, cloud optimization, and AI-driven innovation. This leadership position requires mentoring teams, enhancing development efficiency, and collaborating with REA Group leaders to align with the global architectural strategy. Your key responsibilities in this role will include:
- Architectural Leadership: Documenting key technical choices, implementing scalable and secure architectures across Housing and PropTiger, and aligning technical decisions with business goals using microservices, distributed systems, and API-first design.
- Cloud & DevOps Excellence: Optimizing cloud infrastructure for cost, performance, and scalability, improving SEO performance, and enhancing CI/CD pipelines, automation, and Infrastructure as Code (IaC) to accelerate delivery.
- Security & Compliance: Establishing and enforcing security best practices for data protection, identity management, and compliance, as well as strengthening security posture through proactive risk mitigation and governance.
- Data & AI Strategy: Architecting data pipelines and AI-driven solutions for automation and data-driven decision-making, and leading Generative AI initiatives to enhance product development and user experiences.
- Incident Management & Operational Excellence: Establishing best practices for incident management and driving site reliability engineering (SRE) principles to improve uptime, observability, and performance monitoring.
- Team Leadership & Mentorship: Mentoring engineering teams to foster a culture of technical excellence, innovation, and continuous learning, as well as collaborating with product and business leaders to align technology roadmaps with strategic objectives.
The ideal candidate for this role should have:
- 12+ years of experience in software architecture, cloud platforms (AWS/GCP), and large-scale system design.
- Expertise in microservices, API design, DevOps, CI/CD, and cloud cost optimization.
- A strong background in security best practices and governance.
- Experience in data architecture, AI/ML pipelines, and Generative AI applications.
- Proven leadership skills in mentoring and developing high-performing engineering teams.
- Strong problem-solving, analytical, and cross-functional collaboration skills.
By joining REA India in this role, you will have the opportunity to:
- Build and lead high-scale real estate tech products.
- Drive cutting-edge AI and cloud innovations.
- Mentor and shape the next generation of top engineering talent.
In summary, as the Head of Architecture at REA India, you will play a crucial role in shaping the architecture strategy, driving innovation, and leading a team of talented individuals towards achieving the company's vision of changing the way India experiences property.
Posted 1 week ago
12.0 - 16.0 years
0 Lacs
karnataka
On-site
As a Senior Lead Engineer in the Machine Learning Experience (MLX) team at Capital One India, you will be an integral part of a dynamic team focused on observability and model governance automation for cutting-edge generative AI use cases. Your role will involve architecting and developing full-stack solutions to monitor, log, and manage generative AI and machine learning workflows and models. You will collaborate with model and platform teams to create systems that collect metadata and insights to ensure ethical use, data integrity, and compliance with industry standards for Gen-AI. In this role, you will work on building core APIs and SDKs for observability of LLMs and proprietary Foundation Models, including training, pre-training, fine-tuning, and prompting. Leveraging your expertise in observability tools such as Prometheus, Grafana, the ELK Stack, or similar, you will adapt them for Gen-AI systems. Your responsibilities will also include partnering with product and design teams to develop advanced observability tools tailored to Gen-AI and using cloud-based architectures and technologies to deliver solutions that provide deep insights into model performance, data flow, and system health. Additionally, you will collaborate with cross-functional Agile teams, data scientists, ML engineers, and other stakeholders to understand requirements and translate them into scalable and maintainable solutions. Your role will involve using programming languages like Python, Scala, or Java, as well as applying continuous integration and continuous deployment best practices to ensure successful deployments of machine learning models and application code. To be successful in this role, you should have a Master's Degree in Computer Science or a related field, along with a minimum of 12 years of experience in software engineering and solution architecture. You should also possess at least 8 years of experience in designing and building data-intensive solutions using distributed computing, as well as programming experience with Python, Go, or Java. Proficiency in observability tools and OpenTelemetry, as well as excellent communication skills to articulate complex technical concepts to diverse audiences, are essential for this position. Moreover, experience in developing and deploying ML platform solutions in public cloud environments such as AWS, Azure, or Google Cloud Platform will be advantageous. If you are passionate about leveraging advanced analytics, data science, and machine learning to drive business innovation and are looking to be part of a team that is at the forefront of ML model management and deployment, we encourage you to apply for this exciting opportunity at Capital One India.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
As a DevOps Engineer with over 5 years of experience and excellent communication skills, you will be responsible for various key tasks to ensure the smooth functioning of the infrastructure and deployment processes. Your primary responsibilities will include designing, building, and maintaining automation tools for cloud infrastructure, developing and managing CI/CD pipelines, and ensuring the scalability, security, and high availability of cloud resources and services. You will implement monitoring tools such as Prometheus, Grafana, and the ELK stack to track performance, uptime, and system health, and collaborate closely with software developers to streamline development processes and align infrastructure requirements. Using configuration management tools like Ansible, Chef, or Puppet, you will automate environment provisioning and manage Git repositories for seamless code collaboration and integration. In addition to infrastructure management, you will be responsible for ensuring high security standards and best practices are met, responding to and troubleshooting production issues, and minimizing downtime through rapid issue identification and resolution. Your skills and qualifications should include experience with cloud platforms like AWS, Google Cloud, or Azure, proficiency in scripting languages such as Bash or Python, and knowledge of containerization tools like Docker and Kubernetes. Preferred qualifications for this role include experience with microservices architecture, knowledge of serverless computing, an understanding of network security principles like VPNs, firewalls, and load balancing, and familiarity with agile and DevOps methodologies. Your strong problem-solving and communication skills will be essential in collaborating with cross-functional teams to achieve the company's goals.
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
pune, maharashtra
On-site
As a member of Upcycle Reput Tech Pvt Ltd, a blockchain-based startup committed to sustainability and environmental impact reduction, your primary responsibility will be to assist in deploying, managing, and monitoring cloud infrastructure on AWS. You will support various AWS services such as EC2, S3, RDS, Lambda, IAM, VPC, and CloudWatch, ensuring seamless operations. Your role will involve implementing basic CI/CD pipelines using tools like AWS CodePipeline and CodeBuild to streamline development processes. Furthermore, you will play a crucial role in automating infrastructure using Terraform or CloudFormation and ensuring adherence to security best practices in AWS configurations, including IAM policies and VPC security groups. Familiarity with Docker and Kubernetes for containerized deployments is preferred, and you will support log analysis and monitoring using tools like CloudWatch, ELK, or Prometheus. Your ability to document system configurations and troubleshooting procedures will be essential in maintaining operational efficiency. To excel in this role, you should possess 1-2 years of hands-on experience with AWS cloud services, a solid understanding of AWS EC2, S3, RDS, Lambda, IAM, VPC, and CloudWatch, as well as basic knowledge of Linux administration and scripting using Bash or Python. Experience with Infrastructure as Code (IaC), CI/CD pipelines, and Git version control is required, along with a grasp of networking concepts and security protocols. Preferred qualifications include AWS certifications like AWS Certified Cloud Practitioner or AWS Certified Solutions Architect - Associate, and familiarity with serverless computing and monitoring tools like Prometheus, Grafana, or the ELK Stack. Strong troubleshooting skills, effective communication, and the ability to work collaboratively are key attributes for success in this role. In return, Upcycle Reput Tech Pvt Ltd offers a competitive salary and benefits package, the opportunity to engage with exciting and challenging projects, a collaborative work environment, and professional development opportunities. Join us in our mission to integrate sustainable practices into the supply chain and drive environmental impact reduction across industries.
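To make the CloudWatch monitoring duties above concrete, here is a hedged Python (boto3) sketch that creates a CPU-utilization alarm on an EC2 instance; the region, instance ID, SNS topic, and threshold are placeholder assumptions.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")   # assumed region

# Alarm when average CPU on one instance stays above 80% for 10 minutes (2 x 5-minute periods).
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-web-01",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder instance ID
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],      # placeholder SNS topic
    AlarmDescription="Average CPU above 80% for 10 minutes",
)
print("Alarm created")
```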
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
ahmedabad, gujarat
On-site
As a Senior DevOps Engineer at HolboxAI, located in Ahmedabad, India, you will play a crucial role in designing, implementing, and managing our AWS cloud infrastructure. Your primary focus will be on automating deployment processes and migrating production workloads to AWS while ensuring robust, scalable, and resilient systems that adhere to industry best practices. Your key responsibilities will include leading the assessment, planning, and execution of AWS production workload migrations with minimal disruption, building and managing Infrastructure as Code (IaC) using tools like Terraform or CloudFormation, optimizing CI/CD pipelines for seamless deployment through Jenkins, GitLab CI, or AWS CodePipeline, implementing proactive monitoring and alerting using tools such as Prometheus, Grafana, ELK Stack, or AWS CloudWatch, and enforcing best practices in cloud security, including access controls, encryption, and vulnerability assessments. To excel in this role, you should have 3 to 5 years of experience in DevOps, Cloud Infrastructure, or SRE roles with a focus on AWS environments. Proficiency in core AWS services like EC2, S3, RDS, VPC, IAM, Lambda, and cloud migration is essential. You should also be skilled in Terraform or CloudFormation, proficient in scripting and automation using Python or Bash, experienced with CI/CD tools like Jenkins, GitLab CI, or AWS CodePipeline, familiar with monitoring tools such as Prometheus, Grafana, ELK Stack, or AWS CloudWatch, and possess a strong understanding of cloud security principles. Preferred qualifications for this role include certifications such as AWS Certified Solutions Architect and AWS Certified DevOps Engineer, expertise in Docker and Kubernetes for containerization, familiarity with AWS Lambda and Fargate for serverless applications, and skills in performance optimization focusing on high availability, scalability, and cost-efficiency tuning. At HolboxAI, you will have the opportunity to work on cutting-edge AI technologies in a collaborative environment that encourages open knowledge sharing and rapid decision-making. Additionally, you will benefit from generous sponsorship for learning and research initiatives. If you are excited about this opportunity and ready to contribute to HolboxAI's promising journey, we invite you to apply now and take the next step in your career. Join us in shaping the future of cloud engineering and technology at HolboxAI.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
vadodara, gujarat
On-site
Dharmakit Networks is a premium global IT solutions partner dedicated to innovation and success worldwide. Specializing in website development, SaaS, digital marketing, AI solutions, and more, we help brands turn their ideas into high-impact digital products. Known for blending global standards with deep Indian insight, we are now stepping into our most exciting chapter yet. Project Ax1 is our next-generation Large Language Model (LLM), a powerful AI initiative designed to make intelligence accessible and impactful for Bharat and the world. Built by a team of AI experts, Dharmakit Networks is committed to developing cost-effective, high-performance AI tailored for India and beyond, enabling enterprises to unlock new opportunities and drive deeper connections. Join us in reshaping the future of AI, starting from India. As a GPU Infrastructure Engineer, you will be at the core of building, optimizing, and scaling the GPU and AI compute infrastructure that powers Project Ax1.
Key Responsibilities:
- Design, deploy, and optimize GPU infrastructure for large-scale AI workloads.
- Manage GPU clusters across cloud (AWS, Azure, GCP) and on-prem setups.
- Set up and maintain model CI/CD pipelines for efficient training and deployment.
- Optimize LLM inference using TensorRT, ONNX, Nvidia NVCF, etc.
- Manage offline/edge deployments of AI models (e.g., CUDA, Lambda, containerized AI).
- Build and tune data pipelines to support real-time and batch processing.
- Monitor model and infrastructure performance for availability, latency, and cost efficiency.
- Implement logging, monitoring, and alerting using Prometheus, Grafana, ELK, and CloudWatch.
- Work closely with AI, ML, backend, and full-stack teams to ensure seamless model delivery.
Must-Have Skills and Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Hands-on experience with Nvidia GPUs, CUDA, and deep learning model deployment.
- Strong experience with AWS, Azure, or GCP GPU instance setup and scaling.
- Proficiency in model CI/CD and automated ML workflows.
- Experience with Terraform, Kubernetes, and Docker.
- Familiarity with offline/edge AI, including quantization and optimization.
- Logging and monitoring using tools like Prometheus, Grafana, and CloudWatch.
- Experience with backend APIs, data processing workflows, and ML pipelines.
- Experience with Git and collaboration in agile, cross-functional teams.
- Strong analytical and debugging skills.
- Excellent communication, teamwork, and problem-solving abilities.
Good to Have:
- Experience with Nvidia NVCF, DeepSpeed, vLLM, Hugging Face Triton.
- Knowledge of FP16/INT8 quantization, pruning, and other optimization tricks.
- Exposure to serverless AI inference (Lambda, SageMaker, Azure ML).
- Contributions to open-source AI infrastructure projects or a strong GitHub portfolio showcasing ML model deployment expertise.
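For the GPU monitoring responsibilities above, a minimal exporter could publish utilization and memory metrics for Prometheus to scrape. The Python sketch below is an illustrative assumption of how that might look using the NVML bindings (pynvml) and prometheus_client; the port, scrape interval, and metric names are arbitrary choices, not part of the posting.

```python
import time

import pynvml
from prometheus_client import Gauge, start_http_server

pynvml.nvmlInit()
gpu_util = Gauge("gpu_utilization_percent", "GPU utilization percentage", ["gpu"])
gpu_mem = Gauge("gpu_memory_used_bytes", "GPU memory in use", ["gpu"])

# Expose metrics on an arbitrary port for Prometheus to scrape.
start_http_server(9105)
while True:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        gpu_util.labels(gpu=str(i)).set(util.gpu)
        gpu_mem.labels(gpu=str(i)).set(mem.used)
    time.sleep(15)   # assumed collection interval
```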
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
We are currently seeking a DevOps - Digital Engineering Lead Engineer to join our team in Hyderabad, Telangana (IN-TG), India. The ideal candidate is expected to have good experience with the ELK stack, including Kibana and Elasticsearch, and experience building dashboards, writing complex ELK queries, and setting up monitoring dashboards and alerts for SQL databases, Kafka, Redis, Docker, and Kubernetes clusters. The candidate should also have experience setting up custom metrics using OpenTelemetry, preferably in Java/Spring Boot, and should understand GitHub workflows well enough to create new workflows based on existing ones. NTT DATA, a $30 billion global innovator of business and technology services, is committed to hiring exceptional individuals who want to grow with the organization. As a Global Top Employer, NTT DATA serves 75% of the Fortune Global 100 and helps clients innovate, optimize, and transform for long-term success. With diverse experts in more than 50 countries and a robust partner ecosystem, NTT DATA offers services in business and technology consulting, data and artificial intelligence, and industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. NTT DATA is a leading provider of digital and AI infrastructure and is part of the NTT Group, which invests in R&D to support organizations and society in moving confidently and sustainably into the digital future. Visit us at us.nttdata.com.
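The posting asks for custom metrics with OpenTelemetry, preferably in Java/Spring Boot. To keep the examples on this page in one language, here is a hedged Python sketch of the same idea (meter, counter, histogram, periodic export); the meter and metric names are placeholders, and a real setup would swap the console exporter for an OTLP exporter pointing at the collector.

```python
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import (
    ConsoleMetricExporter,
    PeriodicExportingMetricReader,
)

# Export metrics to stdout every 10 seconds; replace with an OTLP exporter for a real backend.
reader = PeriodicExportingMetricReader(ConsoleMetricExporter(), export_interval_millis=10_000)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

meter = metrics.get_meter("orders-service")   # placeholder meter name
processed = meter.create_counter(
    "orders_processed_total", description="Orders processed by the service"
)
latency = meter.create_histogram(
    "order_processing_ms", description="Order processing latency in milliseconds"
)

# Record a few sample measurements with attributes (labels).
processed.add(1, attributes={"status": "ok"})
latency.record(42.5, attributes={"status": "ok"})
```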
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
karnataka
On-site
As a Global Senior Solutions Architect for Trusted Call Solutions at TransUnion, you will play a key role in architecting TransUnion's Branded Communications under the TruContact business portfolio. In this position, you will collaborate closely with carriers and Fortune 500 enterprises globally, serving as both a strategic Solutions Architect and a Sales Engineer. Working alongside a team of senior solution architects and product managers, you will focus on designing, developing, and integrating technology solutions for global customers across different business verticals. Your responsibilities will include working closely with internal stakeholders such as Product Managers, Software Developers, Business Development, Sales, Account Management, Marketing, and standards-writing individuals. By leveraging market and global competitive intelligence, you will shape product features, the roadmap, and new services to drive adoption based on key metrics and deliver value for carriers and enterprises. Your strategic thinking will be instrumental in framing and executing new initiatives that will shape products and services within TransUnion's Branded Communications group. As an experienced and motivated leader, you will bring a unique blend of technical expertise and strong interpersonal skills, enabling you to become the trusted customer advocate. You will guide customers in understanding best practices around TransUnion's solutions, including both on-prem and cloud computing SaaS models, and how to effectively manage various workloads using microservices-based architecture, serverless, and container technology. In this role, you will serve as a thought leader and collaborate with cross-functional teams to create innovative products and services within Trusted Call Solutions, establishing TransUnion as a market leader in specific verticals. Your contributions will be crucial in exceeding revenue objectives, formulating pricing strategies, participating in deep architectural discussions, evangelizing the value proposition of TransUnion solutions, and advocating for investment as needed. To excel in this position, you should have a Bachelor's degree in Computer Science or a related field, with a Master's degree or MBA considered a plus. You should bring a minimum of 10+ years of system design, implementation, and consulting experience with carriers and Fortune 500 enterprises, supporting various network technologies for distributed real-time applications. Additionally, you should have at least 5+ years of experience in solutions consulting, product management, and building products and services with high availability and scalability in the carrier and enterprise domain. Proficiency in modern programming languages such as Java, C++, Python, and Go, and familiarity with frameworks like Spring Boot, Spring Batch, and Spring Data, are essential. Experience with RESTful systems, cloud platforms such as AWS, Azure, or GCP, and building highly scalable solutions based on cloud-native microservices architecture are key requirements for this role. Your ability to think strategically, communicate effectively across all levels, and manage cross-functionally will be crucial in driving the success of TransUnion's solutions. Participation in industry standards organizations and proficiency in data structures, algorithm design, and cryptography will be highly advantageous. This hybrid position requires regular performance of job responsibilities both virtually and in person at an assigned TransUnion office location for a minimum of two days a week. As a Senior Solutions Architect at TransUnion, you will have the opportunity to make a significant impact by shaping innovative products and services and driving the company's growth in the global market.
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
karnataka
On-site
All team members are collectively responsible and autonomous for delivering value end to end. You will actively participate in testing, deployment, and production activities to ensure production stability. Collaboration with other team members, both internal Societe Generale staff and external vendor consultants, across different locations in China and India is crucial. Your responsibilities will include Kubernetes management, where you will install, configure, and maintain Kubernetes clusters, ensuring high availability and performance of the environments. Implementing security best practices for Kubernetes clusters will be essential. You will manage on-premise infrastructure to support Kubernetes deployments in China and India and collaborate with internal infrastructure providers (GTS) to optimize resources and cost. Implementing CI/CD pipelines to support automated deployment processes will be part of your role, as will monitoring system performance and reliability, anticipating scaling needs, and troubleshooting and resolving issues related to Kubernetes clusters. You will work closely with the development teams to streamline and improve the developer experience, and participate in on-call rotations to ensure system reliability. We are looking for candidates with a minimum of 10 years of experience and strong technical skills. Proficiency in managing Kubernetes clusters on-premises is a must. Experience with CI/CD tools such as Jenkins and GitHub Actions, knowledge of Python and Unix (bash, systemd), and familiarity with the ELK stack (Elasticsearch, Logstash, Kibana) are essential technical skills. Excellent problem-solving and analytical skills, strong communication and collaborative abilities, and the ability to work in a fast-paced agile environment are required soft skills. At Societe Generale, we believe that people are drivers of change, shaping the world of tomorrow through their initiatives. Join us to have a positive impact on the future, be directly involved, grow in a stimulating environment, and develop your expertise. Our employees have the opportunity to dedicate several days per year to solidarity actions during working hours, contributing to sponsoring individuals struggling with their orientation or professional integration, supporting the financial education of young apprentices, and sharing skills with charities. We are committed to supporting our Group's ESG strategy by implementing ESG principles in all our activities and policies, translating them into business activities, our work environment, and responsible practices for environmental protection. Join us and be part of a diverse and inclusive environment where your contributions make a difference.
Posted 2 weeks ago
1.0 - 5.0 years
0 Lacs
noida, uttar pradesh
On-site
We are seeking a highly motivated Full Stack Engineer with 1-2 years of experience to join our team. The ideal candidate should possess expertise in React.js for frontend development and Java Spring Boot for backend development, as well as experience with SQL, microservices, message queues, and caching. In this role, you will work across both the frontend and backend to design scalable, modular, and high-performance applications.
Responsibilities:
- Develop responsive and dynamic UI components using React.js, Webpack, and Redux.
- Implement reusable components to ensure a consistent user experience.
- Optimize web applications for speed and scalability.
- Handle API integration with backend services and work with RESTful APIs to fetch and update data efficiently.
- Ensure cross-browser compatibility and responsiveness.
- Implement authentication and authorization mechanisms (JWT, OAuth).
- Develop and maintain backend services using Core Java and Spring Boot.
- Implement a microservices architecture to build scalable and distributed systems.
- Design and manage relational databases using SQL (e.g., MySQL, PostgreSQL).
- Integrate and manage message queues (RabbitMQ, Kafka, or ActiveMQ) for asynchronous processing.
- Implement caching solutions (Redis, Memcached) for better performance.
- Develop and expose RESTful APIs to interact with frontend applications.
- Ensure security, performance, and high availability of backend services.
- Write clean, efficient, and well-documented code while following best practices.
Requirements:
- Strong proficiency in React and JavaScript/TypeScript.
- Familiarity with SPAs, Module Federation, or Webpack.
- Experience with Redux and the Context API for state management.
- Proficiency in HTML5, CSS3, and responsive design frameworks like Bootstrap.
- Experience with API integration using Axios or the Fetch API.
- Familiarity with component-driven UI development and reusable design patterns.
- Strong knowledge of Core Java and Spring Boot.
- Good understanding of microservices architecture and best practices.
- Experience with SQL databases (MySQL, PostgreSQL).
- Familiarity with message queues (RabbitMQ, Kafka, or ActiveMQ).
- Hands-on experience with caching solutions (Redis, Memcached).
- Understanding of API development and security best practices (JWT, OAuth).
- Familiarity with version control (Git, GitHub, GitLab).
- Experience in writing unit tests (JUnit and Mockito for Java, Jest for React).
- Basic knowledge of CI/CD pipelines and DevOps tools.
- Understanding of containerization (Docker, Kubernetes) is a plus.
- Strong problem-solving and debugging skills.
- Familiarity with WebSockets for real-time applications.
- Knowledge of monitoring tools like Prometheus, Grafana, or the ELK stack.
This job opportunity is from Big Oh Tech, posted by Anasua Maitra.
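The requirements above include caching with Redis. Although the posting's stack is Java/React, here is a language-neutral illustration of the cache-aside pattern as a hedged Python sketch; the key naming, TTL, and the fetch_product_from_db placeholder are assumptions.

```python
import json

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)   # assumed local Redis
CACHE_TTL_SECONDS = 300   # assumed TTL

def fetch_product_from_db(product_id: int) -> dict:
    """Placeholder standing in for the real database query."""
    return {"id": product_id, "name": "Sample product", "price": 499}

def get_product(product_id: int) -> dict:
    """Cache-aside read: try Redis first, fall back to the DB and populate the cache."""
    key = f"product:{product_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)                               # cache hit
    product = fetch_product_from_db(product_id)                 # cache miss
    r.setex(key, CACHE_TTL_SECONDS, json.dumps(product))        # store with TTL
    return product

print(get_product(42))
```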
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
As a member of the Financial Market Digital Channels team, you will be involved in providing cutting-edge technology solutions to support the bank's Financial Markets business. Specifically, you will work on the bank's proprietary pricing, execution, and trade processing platform. Your role will require a deep understanding of the domain, a scientific approach, and innovative solutions to address the challenges of best serving customers in a highly competitive environment. This is a unique opportunity to join a global organization that collaborates with smart technologists in the financial markets sector. The team culture is characterized by openness, intellectual curiosity, and a fun atmosphere, with ample opportunities for learning and career growth for individuals with high energy and a drive to excel. Your responsibilities will include understanding business and functional requirements, developing frameworks, creating libraries, and writing React components for applications. You will also be expected to propose enhancements to existing frameworks, implement best practices, and stay abreast of the latest trends in UI development and ReactJS. The ideal candidate for this role should have over 8 years of tech experience, preferably with exposure to capital markets front office trading. Proficiency in Core JavaScript and TypeScript is essential, along with experience in developing single-page modular applications, real-time event-driven applications, and latency-sensitive systems. Strong skills in React, NPM/YARN, Webpack, and CSS are required, as well as experience with CI/CD (Azure DevOps preferred), Docker/OpenShift, and the ELK stack. You should excel as a UI developer, demonstrating good software design principles and the ability to write robust code with accompanying test suites. Effective communication of implemented solutions and rationale is key, along with familiarity with RxJS and Kafka. Agile experience is a must. Desirable skills for this position include full-stack development with Java, Spring Boot, and microservices, as well as proficiency in English with strong communication skills. The ability to work independently, take ownership of tasks, navigate ambiguity, and collaborate with stakeholders globally is crucial for success in this role. If you are a self-motivated individual with a passion for technology and a drive to excel in a dynamic and challenging environment, we encourage you to apply and be part of our innovative team dedicated to shaping the future of financial markets technology.
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
You have 6-9 years of experience in the IT industry, with a focus on the design and implementation of observability frameworks. Your expertise includes working with Splunk, Kubernetes, Ansible, Python, and the ELK Stack (Elasticsearch, Logstash, Kibana). In this role, you will be responsible for hands-on implementation and administration of Splunk. You should have a solid understanding of designing and implementing Splunk components such as Indexers, Forwarders, and Search Heads. Additionally, experience with enterprise observability tools from various vendors, such as OpenTelemetry, Dynatrace, Splunk, Sahara, and OpenSearch, is preferred. Your responsibilities will also include establishing standard methodologies for monitoring, logging, and alerting across distributed infrastructure stacks. Proficiency in RHEL, Kubernetes, Ansible, and Puppet distributions is essential for this role. Familiarity with public cloud observability offerings would be advantageous. Furthermore, experience in writing test plan automation would be considered a plus. If you are seeking a challenging opportunity where you can leverage your expertise in observability frameworks and tools, this position could be the right fit for you.
Posted 2 weeks ago