8.0 - 12.0 years
35 - 50 Lacs
Karimnagar
Work from Office
Role & responsibilities:
- Build F5 Distributed Cloud data and management systems
- Design, develop, and enhance data, analytics, and AI/Gen AI powered services on the SaaS platform
- Design, develop, and enhance the telemetry and metrics pipeline and services
- Work closely with product, marketing, operations, platform, and customer support teams to create innovative solutions for Cloud product delivery

Preferred candidate profile:
- Bachelor's degree in computer science, or equivalent professional experience (7+ years)
- Proficiency in cloud-native development and programming languages such as Go, Java, and Python
- Experience with data/stream processing (e.g., Kafka, Pub/Sub, Dataflow, Vector, Spark, Flink) and with databases and data warehouses (ClickHouse, BigQuery, StarRocks, Elasticsearch, Redis)
- Experience with logs, metrics, and telemetry (Prometheus, OpenTelemetry)
- Experience with data system quality, monitoring, and performance
- Experience with SaaS multi-tenancy, onboarding, metering & billing, and monitoring & alerting
- Experience with container and orchestration technologies (Kubernetes) and microservices
- Experience with automation, cloud infrastructure, tooling, workloads, and modern CI/CD
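The stream-processing stack this listing names (Kafka, Spark, Flink) centers on windowed aggregation of event streams feeding a metrics pipeline. As an illustrative aside rather than part of the posting, a tumbling-window metric counter can be sketched in plain Python; all names and data here are invented for the example:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs):
    """Group (timestamp, metric_name) events into fixed windows and count.

    A toy stand-in for the aggregation stage of a metrics pipeline;
    Kafka/Flink would do this at scale with real watermarks and state.
    """
    windows = defaultdict(lambda: defaultdict(int))
    for ts, name in events:
        window_start = ts - (ts % window_secs)  # floor to window boundary
        windows[window_start][name] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [(1, "http_requests"), (3, "http_requests"), (12, "errors"), (14, "http_requests")]
print(tumbling_window_counts(events, 10))
```

A real pipeline would also handle late-arriving events and emit results incrementally; this sketch only shows the windowing arithmetic.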
Posted 2 weeks ago
8.0 - 12.0 years
35 - 50 Lacs
Vijayawada
Work from Office
Posted 2 weeks ago
8.0 - 12.0 years
35 - 50 Lacs
Warangal
Work from Office
Posted 2 weeks ago
8.0 - 12.0 years
35 - 50 Lacs
Noida
Work from Office
Posted 2 weeks ago
8.0 - 12.0 years
35 - 50 Lacs
Mumbai
Work from Office
Posted 2 weeks ago
6.0 - 11.0 years
5 - 9 Lacs
Pune
Work from Office
You will be part of the Storage Development business of the Infrastructure organization, with the following key responsibilities:

Responsibilities:
- Handle the cases most highly escalated by our support/L2 teams, ensuring customers receive top-level help and a better experience on their most impactful issues.
- Provide help to L2 support engineers.
- Work within the Ceph development teams, fixing customer-reported issues so that our customers and partners receive an enterprise-class product.
- Work to exceed customer expectations by providing outstanding sustaining service and regular updates to L2 teams.
- Understand our customers' and partners' needs and work with product management teams to drive these features and fixes directly into the product.

Required education: Bachelor's degree
Preferred education: Bachelor's degree

Required technical and professional expertise:
- 6+ years of experience as an L3, sustaining, or development engineer, or directly related experience.
- Senior-level Linux storage system administration experience, including system installation, configuration, maintenance, scripting in Bash, and use of Linux tooling for advanced log debugging.
- Advanced troubleshooting and debugging skills, with a passion for problem-solving and investigation.
- Ability to work and collaborate with a global team and a drive to share knowledge with peers.
- 1+ years of experience with Ceph/OpenShift/Kubernetes technologies.
- Strong scripting (Python, Bash, etc.) and programming (C/C++) skills; able to send upstream patches to fix customer-reported issues.
- In-depth knowledge of Ceph storage architecture, components, and deployment.
- Hands-on experience configuring and tuning Ceph clusters.
- Understanding of RADOS, CephFS, and RBD (RADOS Block Device).

Preferred technical and professional experience:
- Knowledge of open source development and working experience in open source projects.
- Certifications related to Ceph storage and performance testing are a plus.
- Familiarity with cloud platforms (AWS, Azure, GCP) and their storage services.
- Experience with container orchestration tools such as Kubernetes.
- Knowledge of monitoring tools (Prometheus, Grafana) and logging frameworks.
- Ability to work effectively in a collaborative, cross-functional team environment.
- Knowledge of AI/ML and exposure to Gen AI.
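The "advanced log debugging" and scripting skills this listing asks for can be illustrated with a toy helper that tallies error patterns in a log stream. The patterns and log lines below are invented for the example, not taken from Ceph's actual log format:

```python
import re
from collections import Counter

def summarize_log_errors(lines, patterns=("ERR", "FAILED", "slow request")):
    """Tally how often each error pattern appears across log lines.

    Hypothetical helper in the spirit of triaging escalated cases;
    in practice you would use real Ceph log markers and severities.
    """
    counts = Counter()
    for line in lines:
        for pat in patterns:
            if re.search(pat, line):
                counts[pat] += 1
    return dict(counts)

log = [
    "2024-01-01 osd.3 ERR connection reset",
    "2024-01-01 osd.3 slow request 30s",
    "2024-01-01 mon.a ERR quorum lost",
]
print(summarize_log_errors(log))
```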
Posted 2 weeks ago
5.0 - 10.0 years
10 - 14 Lacs
Pune
Work from Office
As a Site Reliability Engineer, you will work in an agile, collaborative environment to build, deploy, configure, and maintain systems for the IBM client business. In this role, you will lead the problem-resolution process for our clients, from analysis and troubleshooting to deploying the latest software updates and fixes.

Your primary responsibilities include:
- 24x7 observability: be part of a worldwide team that monitors the health of production systems and services around the clock, ensuring continuous reliability and an optimal customer experience.
- Cross-functional troubleshooting: collaborate with engineering teams to provide initial assessments and possible workarounds for production issues; troubleshoot and resolve production issues effectively.
- Deployment and configuration: leverage continuous delivery (CI/CD) tools to deploy services and configuration changes at enterprise scale.
- Security and compliance implementation: implement security measures that meet or exceed industry standards for regulations such as GDPR, SOC 2, ISO 27001, PCI, HIPAA, and FBA.
- Maintenance and support: apply Couchbase security patches and upgrades, support Cassandra and MongoDB on the pager-duty rotation, and collaborate with Couchbase product support on issue resolution.

Required education: Bachelor's degree
Preferred education: Bachelor's degree

Required technical and professional expertise:
- Bachelor's degree in Computer Science, IT, or equivalent.
- 5+ years of experience with a database such as Netezza, Db2, or MSSQL.
- 5+ years of experience in DevOps, CloudOps, or SRE roles.
- Foundational experience with Linux/Unix systems.
- Hands-on exposure to cloud platforms (IKS, AWS, or Azure).
- Understanding of networking and databases.
- Strong troubleshooting and problem-solving skills.

Preferred technical and professional experience:
- Databases: experience with Netezza/Db2 database administration strongly preferred; monitor and optimize database performance and reliability; configure and troubleshoot database issues.
- Kubernetes/OpenShift: experience with production Kubernetes/OpenShift environments strongly preferred.
- Automation/scripting: in-depth experience with Ansible, Python, Terraform, and CI/CD tools such as Jenkins, IBM Continuous Delivery, and ArgoCD.
- Monitoring/observability: hands-on experience crafting alerts and dashboards with tools such as Instana, New Relic, and Grafana/Prometheus.
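SRE roles like this one typically reason about reliability in terms of SLOs and error budgets. As a hedged sketch (assuming a simple request-count SLI, which the posting does not specify), the remaining budget can be computed like this:

```python
def error_budget_remaining(slo_target, total_requests, failed_requests):
    """Fraction of the error budget still unspent for a request-based SLO.

    Example: a 99.9% SLO over 1,000,000 requests allows 1,000 failures,
    so 300 failures leaves 70% of the budget. This is a sketch of the
    request-count formulation only, not a time-window burn-rate alert.
    """
    allowed = total_requests * (1.0 - slo_target)  # failures the SLO permits
    if allowed == 0:
        return 0.0
    return max(0.0, 1.0 - failed_requests / allowed)

print(error_budget_remaining(0.999, 1_000_000, 300))
```

Real alerting would usually layer multi-window burn rates on top of this, so a fast burn pages immediately while a slow burn opens a ticket.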
Posted 2 weeks ago
8.0 - 12.0 years
35 - 50 Lacs
Bengaluru
Work from Office
Posted 2 weeks ago
4.0 - 6.0 years
3 - 7 Lacs
Bengaluru
Work from Office
As a Hybrid Cloud Support Engineer, you will use your passion for helping others to ensure that our users and enterprises are successful in their use of DataStax products and solutions. This is a continuous learning and teaching role in which you will develop and share your knowledge of troubleshooting, configuration, and exciting new technologies, inclusive of and complementary to Apache Cassandra, DataStax Enterprise, and Astra.

What you will do:
- Research, reproduce, troubleshoot, and solve highly challenging technical issues
- Provide thoughtful direction and support for technical inquiries
- Ensure that customer issues are resolved as expediently as possible
- Diagnose and reproduce customer-reported issues and log JIRA tickets
- Fulfill the on-call rotation requirements of this role, covering after-hours, holiday, and weekend support
- Create code samples, tutorials, and articles for the DataStax Knowledge Base
- Collaborate on and contribute to Support Team infrastructure tools and processes

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise:
- 4-6 years of experience supporting large enterprise customers in a customer-facing support role
- Experience supporting a Software-as-a-Service cloud product
- Experience with Grafana, Prometheus, Splunk, Datadog, and other monitoring solutions
- Experience supporting Kubernetes-based distributed applications, or an understanding of Kubernetes fundamentals
- Experience with pub-sub, messaging, and streaming solutions such as Pulsar and Kafka
- Experience using APIs and an understanding of the application development lifecycle with a language or framework based on Java, Python, or Go (preferred)
- Experience or certifications with AWS/GCP/Azure deployments and associated cloud-based monitoring tools (preferred)
- Experience with Linux operating systems, including the command line, performance, and network troubleshooting
- Excellent verbal and written communication skills
- Lifetime learner, self-motivated, with the ability to multi-task in high-pressure situations

Preferred technical and professional experience:
- Experience supporting Apache Cassandra environments or other relational and/or alternative database technologies
- Understanding of Java, Python, Go, and/or another language (troubleshooting skills)
- Experience with escalation management and customer success or premium support
- Experience working in a fast-moving, high-pressure environment
Posted 2 weeks ago
8.0 - 12.0 years
35 - 50 Lacs
Hyderabad
Work from Office
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Vadodara, Gujarat
On-site
As a Senior Software Engineer (Java Developer, Spring Boot) at our organization, you will be responsible for designing, developing, and deploying high-performance, scalable Java-based microservices. You will have the opportunity to work on cutting-edge technologies and contribute to the development of cloud-native applications. Your key skills should include a strong foundation in Core Java (8/11/17), object-oriented programming, multi-threading, and exception handling. Additionally, experience with Spring Boot, REST APIs, Spring Data JPA, Spring Security, and Spring Cloud is essential for this role. You will be expected to follow API-first and Cloud-native design principles while implementing and maintaining REST APIs following OpenAPI/Swagger standards. Analyzing code review reports, ensuring adherence to clean code principles, and driving the adoption of automated testing practices will be crucial aspects of your responsibilities. Collaboration and mentoring are also essential parts of this role. You will work closely with DevOps, Product Owners, and QA teams to deliver features effectively. Mentoring junior developers, conducting code walkthroughs, and leading design discussions are key responsibilities that contribute to the growth of the team. Preferred experience for this role includes 5+ years of hands-on Java development experience, a deep understanding of Microservices design patterns, exposure to Cloud deployment models, proficiency with Git, Jenkins, SonarQube, and containerization, as well as experience working in Agile/Scrum teams. In addition to technical skills, behavioral traits such as an ownership-driven mindset, strong communication skills, and the ability to dive deep into technical problems to deliver solutions under tight deadlines are highly valued in our organization. 
If you are passionate about building scalable backend systems, possess the required technical skills, and thrive in a collaborative work environment, we encourage you to apply for this exciting opportunity.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Lead Software Engineer at JPMorgan Chase within the Consumer & Community Banking, you will have the opportunity to impact your career and embark on an adventure where you can push the limits of what's possible. You will play a crucial role in an agile team dedicated to developing, improving, and providing reliable, cutting-edge technology solutions that are secure, stable, and scalable. Your responsibilities will involve executing creative software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems. You will be tasked with developing secure, high-quality production code, reviewing and debugging code written by others, and identifying opportunities to eliminate or automate the remediation of recurring issues to enhance the overall operational stability of software applications and systems. Additionally, you will lead evaluation sessions with external vendors, startups, and internal teams to drive outcomes-oriented probing of architectural designs, technical credentials, and applicability for use within existing systems and information architecture. You will also lead communities of practice across Software Engineering to promote the awareness and utilization of new and leading-edge technologies while fostering a team culture of diversity, opportunity, inclusion, and respect. The ideal candidate for this role should possess formal training or certification on software engineering concepts and at least 5 years of applied experience. You should have experience with Java, Microservices, Spring Boot, and Kafka, along with familiarity with Microservices architecture and related technologies such as Docker, Kubernetes, and API Gateway. Experience with distributed tracing and monitoring tools such as Prometheus and Grafana is desired. 
Hands-on practical experience in delivering system design, application development, testing, and operational stability is essential, and proficiency in automation and continuous delivery methods is a must. You should be proficient in all aspects of the Software Development Life Cycle and have an advanced understanding of agile methodologies such as CI/CD, Application Resiliency, and Security. Demonstrated proficiency in software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.) is required, along with practical cloud-native experience. Preferred qualifications include experience with AI technologies, including machine learning frameworks such as TensorFlow or PyTorch, as well as in-depth knowledge of the financial services industry and its IT systems. Proficiency in Python and AI would also be advantageous for this role.
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
As a Senior Software DevOps Engineer, you will lead the design, implementation, and evolution of telemetry pipelines and DevOps automation that enable next-generation observability for distributed systems. You will blend a deep understanding of OpenTelemetry architecture with strong DevOps practices to build a reliable, high-performance, self-service observability platform across hybrid cloud environments (AWS and Azure). Your mission is to empower engineering teams with actionable insights through rich metrics, logs, and traces, while championing automation and innovation at every layer.

You will be responsible for:
- Observability strategy & implementation: architect and manage scalable observability solutions using OpenTelemetry (OTel), encompassing Collectors, instrumentation, export pipelines, and processors & extensions for advanced enrichment and routing.
- DevOps automation & platform reliability: own the CI/CD experience using GitLab pipelines, integrating infrastructure automation with Terraform, Docker, and scripting in Bash and Python; build resilient, reusable infrastructure-as-code modules across the AWS and Azure ecosystems.
- Cloud-native enablement: develop observability blueprints for cloud-native apps across AWS (ECS, EC2, VPC, IAM, CloudWatch) and Azure (AKS, App Services, Monitor); optimize the cost and performance of telemetry pipelines while ensuring SLA/SLO adherence for observability services.
- Monitoring, dashboards, and alerting: build and maintain intuitive, role-based dashboards in Grafana and New Relic, enabling real-time visibility into service health, business KPIs, and SLOs; implement alerting best practices integrated with incident management systems.
- Innovation & technical leadership: drive cross-team observability initiatives that reduce MTTR and elevate engineering velocity; champion innovation projects including self-service observability onboarding, log/metric reduction strategies, AI-assisted root cause detection, and more; mentor engineering teams on instrumentation, telemetry standards, and operational excellence.

Requirements:
- 6+ years of experience in DevOps, Site Reliability Engineering, or observability roles
- Deep expertise with OpenTelemetry, including Collector configurations, receivers/exporters (OTLP, HTTP, Prometheus, Loki), and semantic conventions
- Proficiency with GitLab CI/CD, Terraform, Docker, and scripting (Python, Bash, Go); strong hands-on experience with AWS and Azure services, cloud automation, and cost optimization
- Proficiency with observability backends: Grafana, New Relic, Prometheus, Loki, or equivalent APM/log platforms
- Passion for building automated, resilient, and scalable telemetry pipelines
- Excellent documentation and communication skills to drive adoption and influence engineering culture

Nice to have:
- Certifications in AWS, Azure, or Terraform
- Experience with OpenTelemetry SDKs in Go, Java, or Node.js
- Familiarity with SLO management, error budgets, and observability-as-code approaches
- Exposure to event streaming (Kafka, RabbitMQ), Elasticsearch, Vault, and Consul
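The Collector concepts this listing names (processors, exporters, pipelines) can be mimicked in a few lines of plain Python to show the data flow. This is a conceptual sketch only, not the opentelemetry-collector configuration format or the Python SDK API:

```python
def attribute_enricher(extra):
    """Processor: merge static attributes into each record, in the
    spirit of an OTel attributes processor. Purely illustrative."""
    def process(record):
        return {**record, "attributes": {**record.get("attributes", {}), **extra}}
    return process

def run_pipeline(records, processors, exporter):
    """Push each record through the processor chain, then export it."""
    out = []
    for rec in records:
        for proc in processors:
            rec = proc(rec)
        out.append(exporter(rec))
    return out

records = [{"name": "http.server.duration", "value": 42}]
exported = run_pipeline(records, [attribute_enricher({"env": "prod"})], exporter=lambda r: r)
print(exported)
```

In a real Collector, the same shape appears declaratively: receivers feed processors, which feed exporters, and the pipeline wiring lives in YAML rather than code.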
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
You are a skilled UI Engineer with 2 years of experience, specializing in React.js. As part of our front-end team in Indore, you will be instrumental in developing user-friendly web applications that offer exceptional user experiences. Your primary responsibilities will include developing and maintaining scalable front-end components using React.js. You will collaborate closely with UX/UI designers and backend developers to implement and enhance user interfaces. It will be your responsibility to optimize applications for speed and scalability, ensuring cross-browser compatibility and responsiveness across devices. Writing clean, well-documented code, participating in code reviews and unit testing, and staying abreast of the latest front-end development trends are also crucial aspects of your role. To excel in this position, you should hold a Bachelor's degree in computer science, engineering, or a related field, along with at least 2 years of professional experience in front-end development. Proficiency in TypeScript/JavaScript, React.js, HTML5, and CSS3 is essential. Experience with Grafana, Prometheus, state management libraries, RESTful APIs, asynchronous request handling, version control systems such as Git, and build tools such as Webpack, Babel, Rollup, or Vite will be advantageous. Familiarity with testing frameworks such as Jest and React Testing Library, experience with time-series databases, and strong problem-solving skills are desired. Effective communication and collaboration skills are also necessary for this role.
Posted 2 weeks ago
1.0 - 5.0 years
0 Lacs
Chandigarh
On-site
You will be a part of our team as a Junior DevOps Engineer, where you will contribute to building, maintaining, and optimizing our cloud-native infrastructure. Your role will involve collaborating with senior DevOps engineers and development teams to automate deployments, monitor systems, and ensure the high availability, scalability, and security of our applications. Your key responsibilities will include managing and optimizing Kubernetes (EKS) clusters, Docker containers, and Helm charts for deployments. You will support CI/CD pipelines using tools like Jenkins, Bitbucket, and GitHub Actions, and help deploy and manage applications using ArgoCD for GitOps workflows. Monitoring and troubleshooting infrastructure will be an essential part of your role, utilizing tools such as Grafana, Prometheus, Loki, and OpenTelemetry. Working with various AWS services like EKS, ECR, ALB, EC2, VPC, S3, and CloudFront will also be a crucial aspect to ensure reliable cloud infrastructure. Automating infrastructure provisioning using IaC tools like Terraform and Ansible will be another key responsibility. Additionally, you will assist in maintaining Docker image registries and collaborate with developers to enhance observability, logging, and alerting while adhering to security best practices for cloud and containerized environments. To excel in this role, you should have a basic understanding of Kubernetes, Docker, and Helm, along with familiarity with AWS cloud services like EKS, EC2, S3, VPC, and ALB. Exposure to CI/CD tools such as Jenkins, GitHub/Bitbucket pipelines, basic scripting skills (Bash, Python, or Groovy), and knowledge of observability tools like Prometheus, Grafana, and Loki will be beneficial. Understanding GitOps (ArgoCD) and infrastructure as code (IaC), experience with Terraform/CloudFormation, and knowledge of Linux administration and networking are also required skills. This is a full-time position that requires you to work in person. 
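For a junior engineer, the declarative shape behind the Helm charts and IaC tools mentioned above can be illustrated by building a minimal Kubernetes Deployment manifest programmatically. The field names follow the apps/v1 schema, but the helper itself is a hypothetical sketch, not a validated manifest generator:

```python
import json

def deployment_manifest(name, image, replicas=2):
    """Build a minimal Kubernetes Deployment as a plain dict.

    Shows the structure that Helm templates and Terraform resources
    ultimately render; a real manifest would add probes, resource
    requests/limits, and labels beyond the bare selector.
    """
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

print(json.dumps(deployment_manifest("web", "nginx:1.27"), indent=2))
```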
If you are interested in this opportunity, please feel free to reach out to us at +91 6284554276.
Posted 2 weeks ago
11.0 - 15.0 years
0 Lacs
Karnataka
On-site
As an AI Research Scientist, your role will involve developing the overarching technical vision for AI systems that cater to both current and future business needs. You will be responsible for architecting end-to-end AI applications, ensuring seamless integration with legacy systems, enterprise data platforms, and microservices. Collaborating closely with business analysts and domain experts, you will translate business objectives into technical requirements and AI-driven solutions. Working in partnership with product management, you will design agile project roadmaps that align technical strategy with market needs. Additionally, you will coordinate with data engineering teams to guarantee smooth data flows, quality, and governance across various data sources. Your responsibilities will also include leading the design of reference architectures, roadmaps, and best practices for AI applications. You will evaluate emerging technologies and methodologies, recommending innovations that can be integrated into the organizational strategy. Identifying and defining system components such as data ingestion pipelines, model training environments, CI/CD frameworks, and monitoring systems will be crucial aspects of your role. Leveraging containerization (Docker, Kubernetes) and cloud services, you will streamline the deployment and scaling of AI systems. Implementing robust versioning, rollback, and monitoring mechanisms to ensure system stability, reliability, and performance will also be part of your duties. Project management will be a key component of your role, overseeing the planning, execution, and delivery of AI and ML applications within budget and timeline constraints. You will be responsible for the entire lifecycle of AI application development, from conceptualization and design to development, testing, deployment, and post-production optimization. 
Enforcing security best practices throughout each phase of development, with a focus on data privacy, user security, and risk mitigation, will be essential. Furthermore, providing mentorship to engineering teams and fostering a culture of continuous learning will play a significant role in your responsibilities. In terms of mandatory technical and functional skills, you should possess a strong background in working with or developing agents using langgraph, autogen, and CrewAI. Proficiency in Python, along with robust knowledge of machine learning libraries such as TensorFlow, PyTorch, and Keras, is required. You should also have proven experience with cloud computing platforms (AWS, Azure, Google Cloud Platform) for building and deploying scalable AI solutions. Hands-on skills with containerization (Docker), orchestration frameworks (Kubernetes), and related DevOps tools like Jenkins and GitLab CI/CD are necessary. Experience using Infrastructure as Code (IaC) tools such as Terraform or CloudFormation to automate cloud deployments is essential. Additionally, proficiency in SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra) and expertise in designing distributed systems, RESTful APIs, GraphQL integrations, and microservices architecture are vital for this role. Knowledge of event-driven architectures and message brokers (e.g., RabbitMQ, Apache Kafka) is also required to support robust inter-system communications. Preferred technical and functional skills include experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK Stack) to ensure system reliability and operational performance. Familiarity with cutting-edge libraries such as Hugging Face Transformers, OpenAI's API integrations, and other domain-specific tools is advantageous. 
Experience in large-scale deployment of ML projects, along with a good understanding of DevOps/MLOps/LLM Ops and of training and fine-tuning large language models (LLMs) such as PaLM 2, GPT-4, and LLaMA, is beneficial. Key behavioral attributes for this role include the ability to mentor junior developers, take ownership of project deliverables, contribute to risk mitigation, and understand business objectives and functions well enough to support data needs. If you have a Bachelor's or Master's degree in Computer Science, certifications in cloud technologies (AWS, Azure, GCP), TOGAF certification (good to have), and 11 to 14 years of relevant work experience, this role might be the perfect fit for you.
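The event-driven architectures and message brokers this listing mentions (RabbitMQ, Kafka) decouple producers from consumers via topics. As a hedged, in-process sketch of that pattern, not of either broker's actual client API:

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process pub/sub bus illustrating the event-driven
    pattern; a stand-in for broker topics, with no persistence,
    ordering guarantees, or delivery retries."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a callable to receive every payload on a topic."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        """Deliver the payload synchronously to all topic subscribers."""
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()
received = []
bus.subscribe("model.trained", received.append)  # topic name is invented
bus.publish("model.trained", {"run_id": "abc123", "accuracy": 0.91})
print(received)
```

A real broker adds durability and backpressure, but the subscribe/publish contract that inter-service communication relies on is the same shape.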
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
andhra pradesh
On-site
You are a talented Full Stack Developer with a solid background in Laravel, AWS, and DevOps. Your role involves designing, developing, deploying, and maintaining cutting-edge web applications with a focus on performance, scalability, and reliability. You will work on Laravel development, AWS management, and DevOps tasks to ensure seamless CI/CD operations.

In Laravel Development, you will design, develop, and maintain web applications using Laravel, optimize applications for speed and scalability, integrate back-end services, troubleshoot and debug existing applications, and collaborate with front-end developers for seamless integration.

For AWS Management, you will manage and deploy web applications on AWS infrastructure, utilize various AWS services, implement backup, recovery, and security policies, optimize services for cost and performance, and work with Infrastructure as Code tools like AWS CloudFormation or Terraform.

In DevOps, your responsibilities include designing and implementing CI/CD pipelines, maintaining infrastructure automation, monitoring server and application performance, developing configuration management tooling, implementing logging and monitoring solutions, and collaborating with development and QA teams on code deployments and releases.

Requirements for this role include a Bachelor's degree in Computer Science or a related field, 4+ years of Laravel experience, 2+ years of AWS experience, 4+ years of DevOps experience, proficiency in version control systems, strong knowledge of database systems, experience with containerization tools, familiarity with agile methodologies, problem-solving skills, detail orientation, and the ability to work in a fast-paced environment. Preferred qualifications include AWS certifications, experience with serverless architecture and microservices, knowledge of front-end technologies, familiarity with monitoring tools, and an understanding of security best practices.

Soft skills such as communication, collaboration, independence, teamwork, analytical skills, and problem-solving abilities are also essential. This position offers a competitive salary, opportunities to work with the latest technologies, professional development opportunities, health insurance, paid time off, and other benefits in a collaborative and innovative work culture.
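The CI/CD pipeline work this posting describes rests on one simple control-flow rule: stages run in order and later stages are gated on earlier ones. A minimal, tool-agnostic sketch of that fail-fast behavior (stage names are illustrative, not Jenkins or GitLab CI syntax):

```python
from typing import Callable

def run_pipeline(stages: dict[str, Callable[[], bool]]) -> list[tuple[str, str]]:
    """Run named stages in insertion order; stop at the first failure,
    mirroring how CI/CD tools gate later stages on earlier ones."""
    results: list[tuple[str, str]] = []
    for name, stage in stages.items():
        ok = stage()
        results.append((name, "passed" if ok else "failed"))
        if not ok:
            break  # later stages (e.g. deploy) never run after a failure
    return results

report = run_pipeline({
    "build": lambda: True,
    "test": lambda: False,   # simulate a failing test stage
    "deploy": lambda: True,  # skipped because "test" failed
})
```

Real pipelines add artifact passing, parallel jobs, and retries, but the gating logic above is the invariant they all preserve.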
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
haryana
On-site
As a Java Microservices Lead (4+ years of experience, immediate joiner, based in Pune), you will play a crucial role in the end-to-end architecture, development, and deployment of enterprise Java microservices-based applications. Your primary responsibilities will include collaborating with cross-functional teams to architect, design, and develop solutions using core Java, Spring Boot, Spring Cloud, and AWS API Gateway. You will also lead and mentor a team of developers, participate in the entire software development life cycle, and drive adoption of microservices patterns and API design.

Your expertise in Java, Spring Boot, AWS API Gateway, and microservices architecture will be essential in delivering high-quality code that follows best practices and coding standards. Hands-on experience with containerization technologies like Docker, orchestration platforms such as Kubernetes, and deployment on cloud services like AWS, Azure, or Google Cloud is highly valuable. Familiarity with relational and NoSQL databases, Agile methodologies, version control systems, and software engineering best practices will also contribute to the success of the projects.

Strong problem-solving and analytical skills, attention to detail, and the ability to work both independently and collaboratively in a fast-paced environment will be key assets in troubleshooting, debugging, and resolving issues across distributed systems. Excellent communication and interpersonal skills will help foster a culture of collaboration, continuous improvement, and technical excellence within the team. Staying up to date with industry trends and introducing innovative solutions to improve application development is encouraged.

In summary, as a Java Microservices Lead, you will be at the forefront of designing and developing scalable, cloud-native solutions, optimizing application performance and scalability, and establishing CI/CD pipelines. Your technical skills in Java, Spring Boot, microservices architecture, cloud platforms, databases, CI/CD, DevOps, and monitoring tools will be crucial to the success of the projects and the team.
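Resolving issues across distributed systems, as this posting requires, often involves resilience patterns such as the circuit breaker: after repeated downstream failures, callers fail fast instead of piling load onto a struggling service. A minimal sketch (the threshold is illustrative, and recovery/half-open logic is omitted for brevity; libraries such as Resilience4j provide the full pattern in the Java ecosystem the posting targets):

```python
class CircuitBreaker:
    """Minimal circuit breaker: after `threshold` consecutive failures
    the circuit opens and further calls are rejected immediately."""

    def __init__(self, threshold: int = 3) -> None:
        self.threshold = threshold
        self.failures = 0

    @property
    def open(self) -> bool:
        return self.failures >= self.threshold

    def call(self, fn):
        if self.open:
            raise RuntimeError("circuit open: fast-failing")
        try:
            result = fn()
            self.failures = 0  # a success resets the failure count
            return result
        except Exception:
            self.failures += 1
            raise

breaker = CircuitBreaker(threshold=2)
for _ in range(2):
    try:
        breaker.call(lambda: 1 / 0)  # simulate a failing downstream call
    except ZeroDivisionError:
        pass
# The circuit is now open; subsequent calls are rejected immediately.
```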
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
chennai, tamil nadu
On-site
Qualcomm India Private Limited is looking for a highly skilled and experienced MLOps Engineer to join their team and contribute to the development and maintenance of their ML platform, both on premises and on AWS Cloud. As an MLOps Engineer, you will architect, deploy, and optimize the ML and data platform that supports training machine learning models on NVIDIA DGX clusters and the Kubernetes platform. Your expertise in AWS services such as EKS, EC2, VPC, IAM, S3, and EFS will be crucial for ensuring the smooth operation and scalability of the ML infrastructure. You will collaborate with cross-functional teams, including data scientists, software engineers, and infrastructure specialists, and your expertise in MLOps, DevOps, and GPU clusters will be vital in enabling efficient training and deployment of ML models.

Your responsibilities will include architecting, developing, and maintaining the ML platform; designing and implementing scalable infrastructure solutions for NVIDIA clusters on premises and on AWS Cloud; collaborating with data scientists and software engineers to define requirements; optimizing platform performance and scalability; monitoring system performance; implementing CI/CD pipelines; maintaining a monitoring stack using Prometheus and Grafana; managing AWS services; implementing logging and monitoring solutions; staying updated with the latest advancements in MLOps, distributed computing, and GPU acceleration technologies; and proposing enhancements to the ML platform.

Qualcomm is looking for candidates with a Bachelor's or Master's degree in Computer Science, Engineering, or a related field and proven experience as an MLOps Engineer or in a similar role, with a focus on large-scale ML and/or data infrastructure and GPU clusters. Candidates should have strong expertise in configuring and optimizing NVIDIA DGX clusters; proficiency with the Kubernetes platform and related technologies; solid programming skills in languages like Python and Go; experience with relevant ML frameworks; an in-depth understanding of distributed computing and GPU acceleration techniques; familiarity with containerization technologies and orchestration tools; experience with CI/CD pipelines and automation tools for ML workflows; experience with AWS services and monitoring tools; strong problem-solving skills; and excellent communication and collaboration skills. Qualcomm is an equal opportunity employer and is committed to providing reasonable accommodations to support individuals with disabilities during the hiring process. If you are interested in this role or require more information, please contact Qualcomm Careers.
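The Prometheus monitoring stack mentioned above scrapes metrics in a plain-text exposition format. A minimal sketch that renders counters in that format (the metric names are illustrative; a production service would use an official Prometheus client library rather than hand-formatting):

```python
def render_prometheus(metrics: dict[str, float], help_text: dict[str, str]) -> str:
    """Render counter metrics in the Prometheus text exposition format
    that a /metrics scrape endpoint would serve: a # HELP line, a
    # TYPE line, then the sample itself."""
    lines = []
    for name, value in metrics.items():
        lines.append(f"# HELP {name} {help_text.get(name, '')}")
        lines.append(f"# TYPE {name} counter")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

body = render_prometheus(
    {"ml_jobs_completed_total": 17},
    {"ml_jobs_completed_total": "Training jobs finished."},
)
```

Grafana then visualizes whatever Prometheus scrapes from such endpoints, which is why the two tools appear together in the posting.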
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
noida, uttar pradesh
On-site
The contextualization platform enables large-scale data integration and entity matching across heterogeneous sources. The current engineering focus is to modernize the architecture for better scalability and orchestration compatibility, refactor core services, and lay the foundation for future AI-based enhancements. This is a pivotal development initiative with clear roadmap milestones and direct alignment with a multi-year digital transformation strategy.

We are looking for a skilled and motivated Senior Backend Engineer with strong expertise in Kotlin to join a newly established scrum team responsible for enhancing a core data contextualization platform. This service plays a central role in associating and matching data from diverse sources - time series, equipment, documents, 3D objects - into a unified data model. You will lead backend development efforts to modernize and scale the platform by integrating with an updated data architecture and orchestration framework. This is a high-impact role contributing to a long-term roadmap focused on scalable, maintainable, and secure industrial software.

Key Responsibilities:
- Design, develop, and maintain scalable, API-driven backend services using Kotlin.
- Align backend systems with modern data modeling and orchestration standards.
- Collaborate with engineering, product, and design teams to ensure seamless integration across the broader data platform.
- Implement and refine RESTful APIs following established design guidelines.
- Participate in architecture planning, technical discovery, and integration design for improved platform compatibility and maintainability.
- Conduct load testing, improve unit test coverage, and contribute to reliability engineering efforts.
- Drive software development best practices including code reviews, documentation, and CI/CD process adherence.
- Ensure compliance with multi-cloud design standards and use of infrastructure-as-code tooling (Kubernetes, Terraform).

Qualifications:
- 3+ years of backend development experience, with a strong focus on Kotlin.
- Proven ability to design and maintain robust, API-centric microservices.
- Hands-on experience with Kubernetes-based deployments, cloud-agnostic infrastructure, and modern CI/CD workflows.
- Solid knowledge of PostgreSQL, Elasticsearch, and object storage systems.
- Strong understanding of distributed systems, data modeling, and software scalability principles.
- Excellent communication skills and ability to work in a cross-functional, English-speaking environment.
- Bachelor's or Master's degree in Computer Science or a related discipline.

Bonus Qualifications:
- Experience with Python for auxiliary services, data processing, or SDK usage.
- Knowledge of data contextualization or entity resolution techniques.
- Familiarity with 3D data models, industrial data structures, or hierarchical asset relationships.
- Exposure to LLM-based matching or AI-enhanced data processing (not required, but a plus).
- Experience with Terraform, Prometheus, and scalable backend performance testing.

About the role and key responsibilities:
- Develop Data Fusion - a robust, state-of-the-art SaaS for industrial data.
- Solve concrete industrial data problems by designing and implementing delightful APIs and robust services on top of Data Fusion.
- Work with distributed open-source software such as Kubernetes, Kafka, Spark, and similar to build scalable and performant solutions.
- Help shape the culture and methodology of a rapidly growing company.
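The entity-matching work this platform performs - associating time series, equipment, and documents into a unified model - can be sketched at its simplest as matching on normalized identifiers. This is a deliberately naive illustration; the tag and asset names are invented, and a real contextualization service would layer fuzzy matching and learned rules on top.

```python
import re

def normalize(name: str) -> str:
    """Canonicalize an asset identifier: uppercase and strip
    separators, so 'PUMP-101' and 'pump_101' collapse to one key."""
    return re.sub(r"[^A-Z0-9]", "", name.upper())

def match_entities(timeseries, assets):
    """Associate each time-series tag with the asset whose normalized
    form matches, or None when nothing matches."""
    index = {normalize(a): a for a in assets}
    return {t: index.get(normalize(t)) for t in timeseries}

links = match_entities(
    ["PUMP-101", "VALVE-07"],       # time-series tags (illustrative)
    ["pump_101", "Compressor 3"],   # asset names (illustrative)
)
```

Unmatched tags (here `VALVE-07`) are exactly the cases where the posting's "LLM-based matching" bonus skill would come into play.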
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
noida, uttar pradesh
On-site
The contextualization platform enables large-scale data integration and entity matching across heterogeneous sources. The current engineering focus is to modernize the architecture for better scalability and orchestration compatibility, refactor core services, and lay the foundation for future AI-based enhancements. This is a pivotal development initiative with clear roadmap milestones and direct alignment with a multi-year digital transformation strategy.

We are looking for a skilled and motivated Senior Backend Engineer with strong expertise in Kotlin to join a newly established scrum team responsible for enhancing a core data contextualization platform. This service plays a central role in associating and matching data from diverse sources - time series, equipment, documents, 3D objects - into a unified data model. You will lead backend development efforts to modernize and scale the platform by integrating with an updated data architecture and orchestration framework. This is a high-impact role contributing to a long-term roadmap focused on scalable, maintainable, and secure industrial software.

Key Responsibilities:
- Design, develop, and maintain scalable, API-driven backend services using Kotlin.
- Align backend systems with modern data modeling and orchestration standards.
- Collaborate with engineering, product, and design teams to ensure seamless integration across the broader data platform.
- Implement and refine RESTful APIs following established design guidelines.
- Participate in architecture planning, technical discovery, and integration design for improved platform compatibility and maintainability.
- Conduct load testing, improve unit test coverage, and contribute to reliability engineering efforts.
- Drive software development best practices including code reviews, documentation, and CI/CD process adherence.
- Ensure compliance with multi-cloud design standards and use of infrastructure-as-code tooling (Kubernetes, Terraform).

Qualifications:
- 5+ years of backend development experience, with a strong focus on Kotlin.
- Proven ability to design and maintain robust, API-centric microservices.
- Hands-on experience with Kubernetes-based deployments, cloud-agnostic infrastructure, and modern CI/CD workflows.
- Solid knowledge of PostgreSQL, Elasticsearch, and object storage systems.
- Strong understanding of distributed systems, data modeling, and software scalability principles.
- Excellent communication skills and ability to work in a cross-functional, English-speaking environment.
- Bachelor's or Master's degree in Computer Science or a related discipline.

Bonus Qualifications:
- Experience with Python for auxiliary services, data processing, or SDK usage.
- Knowledge of data contextualization or entity resolution techniques.
- Familiarity with 3D data models, industrial data structures, or hierarchical asset relationships.
- Exposure to LLM-based matching or AI-enhanced data processing (not required, but a plus).
- Experience with Terraform, Prometheus, and scalable backend performance testing.

About the role and key responsibilities:
- Develop Data Fusion - a robust, state-of-the-art SaaS for industrial data.
- Solve concrete industrial data problems by designing and implementing delightful APIs and robust services on top of Data Fusion. Examples include integrating data sources into our platform in a secure and scalable way and enabling high-performance data science pipelines.
- Work with application teams to ensure a delightful user experience that helps users solve complex real-world problems that have not been solved before.
- Work with distributed open-source software such as Kubernetes, Kafka, Spark, and similar to build scalable and performant solutions.
- Work with databases or storage systems such as PostgreSQL, Elasticsearch, or S3-API-compatible blob stores.
- Help shape the culture and methodology of a rapidly growing company.
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
We are looking for an experienced DevOps Architect to spearhead the design, implementation, and management of scalable, secure, and highly available infrastructure. As the ideal candidate, you should possess in-depth expertise in DevOps practices, CI/CD pipelines, cloud platforms, and infrastructure automation across various cloud environments. This role requires strong leadership skills and the ability to mentor team members effectively.

Your responsibilities will include leading and overseeing the DevOps team to ensure the reliability of infrastructure and automated deployment processes. You will design, implement, and maintain highly available, scalable, and secure cloud infrastructure on platforms such as AWS, Azure, and GCP. Developing and optimizing CI/CD pipelines for multiple applications and environments will be a key focus, along with driving Infrastructure as Code (IaC) practices using tools like Terraform, CloudFormation, or Ansible. Monitoring, logging, and alerting solutions will fall under your purview to ensure system health and performance. Collaboration with Development, QA, and Security teams to integrate DevOps best practices throughout the SDLC is essential. You will also lead incident management and root cause analysis for production issues, ensuring robust security practices for infrastructure and pipelines. Guiding and mentoring team members to foster a culture of continuous improvement and technical excellence will be crucial, as will evaluating and recommending new tools, technologies, and processes to enhance operational efficiency.

Qualifications:
- Bachelor's degree in Computer Science, IT, or a related field; Master's degree preferred.
- At least two current cloud certifications (e.g., AWS Solutions Architect, Azure Administrator, GCP DevOps Engineer, CKA).
- 10+ years of relevant experience in DevOps, Infrastructure, or Cloud Operations.
- 5+ years of experience in a technical leadership or team lead role.

Skills & Abilities:
- Expertise in at least two major cloud platforms: AWS, Azure, or GCP.
- Strong experience with CI/CD tools such as Jenkins, GitLab CI, Azure DevOps, or similar.
- Hands-on experience with Infrastructure as Code (IaC) tools like Terraform, Ansible, or CloudFormation.
- Proficiency in containerization and orchestration using Docker and Kubernetes.
- Strong knowledge of monitoring, logging, and alerting tools (e.g., Prometheus, Grafana, ELK, CloudWatch).
- Scripting knowledge in languages like Python, Bash, or Go.
- Solid understanding of networking, security, and system administration.
- Experience implementing security best practices across DevOps pipelines.
- Proven ability to mentor, coach, and lead technical teams.

Conditions:
- Work Arrangement: An occasionally hybrid opportunity based out of our Trivandrum office.
- Travel Requirements: Occasional travel may be required for team meetings, user research, or conferences.
- On-Call Requirements: A light on-call rotation may be required depending on operational needs.
- Hours of Work: Monday to Friday, 40 hours per week, with overlap with PST required.

Values: Our values at AOT guide how we work, collaborate, and grow as a team. Every role is expected to embody and promote values such as innovation, integrity, ownership, agility, collaboration, and empowerment.
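The IaC practice this role drives reduces, conceptually, to reconciling desired state against actual state. A minimal, tool-agnostic sketch of the "plan" step that tools like Terraform perform before applying changes (resource names and attributes here are invented for illustration, not Terraform's actual engine or syntax):

```python
def plan(desired: dict, actual: dict) -> dict:
    """Diff desired infrastructure state against actual state and
    classify each resource as create, update, or delete - the core of
    an IaC execution plan."""
    return {
        "create": sorted(set(desired) - set(actual)),
        "delete": sorted(set(actual) - set(desired)),
        "update": sorted(
            r for r in set(desired) & set(actual) if desired[r] != actual[r]
        ),
    }

actions = plan(
    desired={"vm-web": {"size": "m5.large"}, "bucket-logs": {"region": "us-east-1"}},
    actual={"vm-web": {"size": "m5.xlarge"}, "vm-old": {"size": "t3.micro"}},
)
```

Because the plan is computed from declared state rather than from imperative steps, running it twice against an already-converged environment yields an empty plan - the idempotency that makes IaC safe to automate.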
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
navi mumbai, maharashtra
On-site
You have 5+ years of overall experience in Cloud Operations, including a minimum of 5 years of hands-on experience with Google Cloud Platform (GCP) and at least 3 years of experience in Kubernetes administration. GCP Certified Professional certification is mandatory.

In this role, you will be responsible for managing and monitoring GCP infrastructure resources to ensure optimal performance, availability, and security. You will also administer Kubernetes clusters and handle deployment, scaling, upgrades, patching, and troubleshooting. You will implement and maintain automation for provisioning, scaling, and monitoring using tools like Terraform, Helm, or similar.

Your key responsibilities will include responding to incidents, performing root cause analysis, and resolving issues within SLAs. Configuring logging, monitoring, and alerting solutions across GCP and Kubernetes environments will also be part of your duties. Supporting CI/CD pipelines, integrating Kubernetes deployments with DevOps processes, and maintaining detailed documentation of processes, configurations, and runbooks are critical aspects of this role. Collaboration with Development, Security, and Architecture teams to ensure compliance and best practices is essential, and you will participate in an on-call rotation and respond promptly to critical alerts.

The required skills and qualifications for this position include being a GCP Certified Professional (Cloud Architect, Cloud Engineer, or equivalent) with a strong working knowledge of GCP services such as Compute Engine, GKE, Cloud Storage, IAM, VPC, and Cloud Monitoring. You should also have solid experience in Kubernetes cluster administration, proficiency with Infrastructure as Code tools like Terraform, knowledge of containerization concepts and tools like Docker, experience in monitoring and observability with tools like Prometheus, Grafana, and Stackdriver, familiarity with incident management and ITIL processes, the ability to work in 24x7 operations with rotating shifts, and strong troubleshooting and problem-solving skills.

Preferred skills that would be nice to have for this role include experience supporting multi-cloud environments, scripting skills in Python, Bash, or Go, exposure to other cloud platforms like AWS and Azure, and familiarity with security controls and compliance frameworks.
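The alerting work described above usually distinguishes one-off spikes from sustained breaches; Prometheus alerting rules express this with a `for:` duration that a condition must hold before the alert fires. A minimal sketch of that idea over a sample window (the CPU values and thresholds are illustrative):

```python
def should_fire(samples: list, threshold: float, for_points: int) -> bool:
    """Fire only when the last `for_points` consecutive samples exceed
    the threshold, mimicking the `for:` clause of a Prometheus
    alerting rule that suppresses transient spikes."""
    if len(samples) < for_points:
        return False
    return all(s > threshold for s in samples[-for_points:])

cpu = [40.0, 95.0, 50.0, 96.0, 97.0, 98.0]  # percent utilization over time
spike_only = should_fire(cpu[:3], threshold=90.0, for_points=3)  # one spike, no alert
sustained = should_fire(cpu, threshold=90.0, for_points=3)       # three high points, alert
```

Tuning `for_points` (or the `for:` duration in a real rule) is the usual lever for balancing alert noise against detection latency in a 24x7 rotation.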
Posted 2 weeks ago
3.0 - 8.0 years
10 - 16 Lacs
Bengaluru
Work from Office
Develop and manage job plans, schedules, and work packages for routine maintenance activities in LNG and refining assets. Coordinate with operations, engineering, and reliability teams to ensure safe and timely execution Required Candidate profile Engineering professionals with 3–7 years of experience in maintenance planning/scheduling in oil & gas. Proficient in CMMS tools (JDE), Primavera P6, safety compliance, and resource coordination.
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
andhra pradesh
On-site
The role of Technical Architect for the IoT Platform requires a highly skilled individual with over 10 years of experience, expertise in Java Spring Boot, React.js, and IoT system architecture, and a strong foundation in DevOps practices. As a Technical Architect, you will be responsible for designing scalable, secure, and high-performance IoT solutions. Your role will involve leading full-stack teams and collaborating with product, infrastructure, and data teams to ensure the successful implementation of IoT projects.

Your key responsibilities will include architecture and design tasks such as designing and implementing scalable and secure IoT platform architecture, defining and maintaining architecture blueprints and technical documentation, leading technical decision-making, and ensuring adherence to best practices and coding standards. You will also architect microservices-based solutions using Spring Boot, integrate them with React-based front ends, and define data flow, event processing pipelines, and device communication protocols.

In terms of IoT domain expertise, you will architect solutions for real-time sensor data ingestion, processing, and storage; work closely with hardware and firmware teams on device-to-cloud communication; support multi-tenant, multi-protocol device integration; and guide the design of edge computing, telemetry, alerting, and digital twin models. Your role will also involve DevOps and infrastructure tasks such as defining CI/CD pipelines, managing containerization and orchestration, driving infrastructure automation, ensuring platform monitoring, logging, and observability, and enabling auto-scaling, load balancing, and zero-downtime deployments.

As a Technical Architect, you will be expected to demonstrate leadership by collaborating with product managers and business stakeholders, mentoring and leading a team of developers and engineers, conducting code and architecture reviews, setting goals and targets, organizing features and sprints, and providing coaching and professional development to team members.

Your technical skills and experience should include proficiency in backend technologies such as Java 11+/17, Spring Boot, Spring Cloud, REST APIs, JPA/Hibernate, and PostgreSQL, as well as frontend technologies like React.js, Redux, TypeScript, and Material-UI. Additionally, experience with messaging/streaming platforms, databases, DevOps tools, monitoring tools, cloud platforms, and other relevant technologies is required.

Other must-have qualifications for this role include hands-on IoT project experience, experience designing and deploying multi-tenant SaaS platforms, knowledge of security best practices in IoT and cloud environments, and excellent problem-solving, communication, and team leadership skills. It would be beneficial to have experience with edge computing frameworks, AI/ML model integration, industrial protocols, digital twin concepts, and relevant certifications in AWS/GCP, Kubernetes, or Spring. A Bachelor's or Master's degree in Computer Science, Engineering, or a related field is also required.

By joining us, you will have the opportunity to lead architecture for cutting-edge industrial IoT platforms, work with a passionate team in a fast-paced and innovative environment, and gain exposure to cross-disciplinary challenges in IoT, AI, and cloud-native technologies.
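The telemetry and alerting design this role guides often starts with a rolling window over sensor readings, evaluated at the edge before data is forwarded to the cloud. A minimal sketch of that building block (window size, threshold, and readings are all illustrative):

```python
from collections import deque

class TelemetryWindow:
    """Rolling window over sensor readings with a simple threshold
    alert - the kind of edge-side check an IoT telemetry pipeline runs
    before forwarding data upstream."""

    def __init__(self, size: int, alert_above: float) -> None:
        self.readings = deque(maxlen=size)  # old readings fall off automatically
        self.alert_above = alert_above

    def ingest(self, value: float) -> bool:
        """Add a reading; return True when the rolling mean breaches
        the alert threshold."""
        self.readings.append(value)
        return self.mean() > self.alert_above

    def mean(self) -> float:
        return sum(self.readings) / len(self.readings)

# Three normal readings, then a jump that pushes the rolling mean over 75.
sensor = TelemetryWindow(size=3, alert_above=75.0)
alerts = [sensor.ingest(v) for v in [70.0, 72.0, 74.0, 90.0]]
```

Averaging over the window rather than alerting on raw values is a cheap way to tolerate single-sample sensor noise, at the cost of slightly delayed detection.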
Posted 2 weeks ago