
35553 Kubernetes Jobs - Page 5

JobPe aggregates listings for easy access; you apply directly on the original job portal.

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We’re Looking for AWS & DevOps Trainers! (Pune | Offline) Are you passionate about shaping the next generation of developers? Join us as an AWS & DevOps Trainer and inspire minds with your expertise! What We’re Looking For: Experience: 3+ years (with at least 1 year of teaching) Mode: Offline | Mon-Sat Timings: 8:00 am to 4:00 pm Skills to Teach: Docker, Kubernetes, Jenkins, Terraform, Ansible, Git, CI/CD pipelines, CloudWatch, etc. 📍 Location: Pune ⏳ Full-time If you are interested, please reach out to us directly or share your profile: aditi.garodia@theodysvadhyay.co.in or Surabhi.barat@theodysvadhyay.co.in

Posted 1 day ago

Apply

0.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

Opening for Team Lead - Generative AI / AI-ML Specialist Role Overview: We’re seeking an experienced Data Scientist / Team Lead with deep expertise in Generative AI (GenAI) to design and implement cutting-edge AI models that solve real-world business problems. You’ll work with LLMs, GANs, RAG frameworks, and transformer-based architectures to create production-ready solutions across domains. Key Responsibilities: Design, develop, and fine-tune Generative AI models (LLMs, GANs, Diffusion models, etc.) Work on RAG (Retrieval-Augmented Generation) and transformer-based architectures for contextual responses and document intelligence Customize and fine-tune Large Language Models (LLMs) for domain-specific applications Build and maintain robust ML pipelines and infrastructure for training, evaluation, and deployment Collaborate with engineering teams to integrate models into end-user applications Stay current with the latest GenAI research, open-source tools, and frameworks Analyze model outputs, evaluate performance, and ensure ethical AI practices. Required Skills: Strong proficiency in Python and ML/DL libraries: TensorFlow, PyTorch, HuggingFace Transformers Deep understanding of LLMs, RAG, GANs, Autoencoders, and other GenAI architectures Experience with fine-tuning models using LoRA, PEFT, or similar techniques Familiarity with Vector Databases (e.g., FAISS, Pinecone) and embedding generation Experience working with datasets, data preprocessing, and synthetic data generation Good knowledge of NLP, prompt engineering, and language model safety Experience with APIs, model deployment, and cloud platforms (AWS/GCP/Azure) Nice to Have: Prior work with Chatbots, Conversational AI, or AI Assistants Familiarity with LangChain, LLMOps, or Serverless Model Deployment Background in MLOps, containerization (Docker/Kubernetes), and CI/CD pipelines Knowledge of OpenAI, Anthropic, Google Gemini, or Meta LLaMA models What We Offer: An opportunity to work on real-world GenAI products and POCs Collaborative environment with constant learning and innovation Competitive salary and growth opportunities 5-day work week with a focus on work-life balance Work from office Job Types: Full-time, Permanent Pay: Up to ₹1,600,000.00 per year Benefits: Health insurance Life insurance Paid sick time Paid time off Provident Fund Ability to commute/relocate: Ahmedabad, Gujarat: Reliably commute or planning to relocate before starting work (Required) Work Location: In person
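For illustration of the LoRA/PEFT fine-tuning workflow this listing mentions, a minimal sketch using Hugging Face Transformers and PEFT; the checkpoint name, target modules, and hyperparameters are placeholders, not part of the posting.

# Minimal LoRA fine-tuning sketch (illustrative; checkpoint and hyperparameters are placeholders)
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Attach low-rank adapters to the attention projections instead of updating all weights
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapter weights are trainable

# Training would then proceed with transformers.Trainer or a custom loop on domain data.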

Posted 1 day ago

Apply

0.0 - 3.0 years

3 - 5 Lacs

Bangalore City, Bengaluru, Karnataka

On-site

Job Title: .NET Developer Location: Bangalore, Karnataka Experience: 3-5 years of experience in software development using Angular 14+, .NET Core, C#, SQL Server, Docker, Kubernetes, microservices, and event-driven architecture. Compensation: 3,00,000 - 5,00,000 Roles and Responsibilities: Design and Development: Design, develop, and deploy high-quality software applications using Angular, .NET Core, C#, and SQL Server. Collaborate with cross-functional teams to identify requirements and develop solutions. Participate in code reviews and provide feedback to improve code quality. Technical Leadership: Lead the development of software applications and provide technical guidance to junior developers. Mentor junior developers to improve their skills and knowledge. Collaborate with other teams to ensure alignment with company goals and objectives. Development Releases: Develop and maintain CI/CD pipelines to automate development releases. Collaborate with the QA team to ensure that software applications meet quality standards. Participate in release management and ensure that software applications are deployed smoothly. Performance Optimizations: Identify performance bottlenecks in software applications and develop solutions to improve performance. Optimize database queries and improve data retrieval efficiency. Collaborate with the QA team to ensure that performance optimizations do not introduce new bugs. Information Security: Ensure that software applications comply with information security guidelines and regulations. Collaborate with the security team to identify and mitigate security risks. Participate in security audits and provide feedback to improve security posture. Git and Version Control: Use Git and other version control systems to manage code changes and collaborate with team members. Participate in code reviews and provide feedback to improve code quality. Caching Framework: Knowledge of caching frameworks like Redis is a plus. Collaborate with the team to implement caching solutions to improve application performance. Collaboration with L2 Support Team: Collaborate with the L2 Support Team to educate them on new patches to be released. Troubleshoot any production issues and provide technical guidance to the L2 Support Team. Team Management: Manage a team of junior developers and provide technical guidance and mentorship. Collaborate with other teams to ensure alignment with company goals and objectives. Participate in team meetings and provide feedback to improve team performance. Key Skills Technical Skills: 3-5 years of experience in software development using Angular 14+, .NET Core, C#, and SQL Server. Excellent knowledge of Git and version control systems. Strong understanding of performance optimizations and information security guidelines. Knowledge of caching frameworks like Redis is a plus. Leadership Skills: Experience in managing a team of junior developers. Strong communication and interpersonal skills. Ability to mentor and guide junior developers. Soft Skills: Strong problem-solving skills. Ability to work in a fast-paced environment. Collaborative and team-oriented approach.
Job Type: Full-time Pay: ₹300,000.00 - ₹500,000.00 per year Benefits: Health insurance Paid sick time Paid time off Provident Fund Experience: Docker: 3 years (Required) Kubernetes: 3 years (Required) Microservices: 3 years (Required) Event-driven architecture: 3 years (Required) Angular 14+: 3 years (Required) SQL, C#, .NET development: 3 years (Required) Location: Bangalore City, Bengaluru, Karnataka (Required)

Posted 1 day ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Company Description KPIT Technologies is a global partner in the automotive and mobility ecosystem, dedicated to making software-defined vehicles a reality. As an independent software development and integration partner, KPIT is at the forefront of driving the transition towards a cleaner, smarter, and safer future. With over 11,000 experts globally, specializing in embedded software, AI, and digital solutions, KPIT accelerates the implementation of next-generation technologies. The company operates engineering centers in Europe, the USA, Japan, China, Thailand, and India, collaborating with industry leaders to drive innovation in mobility. Kindly review the CTO job roles listed below. Only eligible candidates should apply, and please do not apply if you have already done so. ***Please share your DOB and PAN when applying for jobs in India, as per company policy. JOB OPENINGS: CTO_Full Stack Dev and Lead: Sr. Team Lead (3 to 12 yrs exp.) Job Requirement Details MANDATORY SKILLS: ReactJS, NodeJS, JavaScript PREFERRED SKILLS: Microservices JOB DESCRIPTION: Experience working in application design and development (ReactJS/Angular, Java/NodeJS, Python, databases). Mandatory skills: JavaScript, jQuery, ReactJS 16 and above, NodeJS/Java, SQL, microservices, HTML5, CSS3 & Sass, Bootstrap 4, Docker. Guide and mentor developers. Resolve technical challenges. Perform code reviews. Requirement gathering; convert requirements into technical tasks. Architect and build customer-facing web applications. Design, build, and maintain efficient and reusable UI components. Design and implement microservice-based applications. Database design; hands-on experience working with SQL and NoSQL databases. Docker-based deployment of applications. Comfortable working on both the front end and back end of web applications. Familiarity developing within MVC frameworks. Strong interpersonal, communication, and teamwork skills. CTO_Python Dev and Lead: Sr. Team Lead (3 to 13 yrs exp.) Job Requirement Details MANDATORY SKILLS: Python, REST API, SQL, RabbitMQ, CI/CD JOB DESCRIPTION: We are seeking an experienced Python Developer who will oversee a team of skilled engineers while being actively involved in the development of cutting-edge AI solutions. The role requires a blend of technical expertise, leadership ability, and hands-on development skills. The Tech Lead will guide the team through complex projects using Agile, ensuring that all solutions are scalable, maintainable, and of the highest quality. Through mentorship, effective communication, and a passion for innovation, the Tech Lead will drive the team to achieve its full potential and deliver outstanding results.
Required skills · Python backend development (3+ years) · REST API development expertise · SQL database proficiency and optimization · Knowledge of design patterns for different scenarios (e.g., pub/sub for async processing, microservices for scalability) · Experience with message brokers (RabbitMQ) and task queues (Celery) · Strong analytical thinking and problem-solving abilities · Experience with containerization and CI/CD workflows Responsibilities · Design and develop backend services for internal tooling platforms · Create scalable APIs and optimize database performance · Select and implement appropriate architectures based on use case requirements · Integrate applications with GenAI services (Azure OpenAI, AWS Bedrock) · Build enterprise-grade solutions for various business applications CTO_AI Application Architect: Architect (13 to 17 yrs exp.) Job Requirement Details MANDATORY SKILLS: Artificial Intelligence, Machine Learning JOB DESCRIPTION: AI Application Architect Job Summary: We are seeking an experienced AI Application Architect to lead the design and development of artificial intelligence (AI) and machine learning (ML) applications across the organization. The successful candidate will have a strong background in software architecture, AI/ML, and cloud computing, with a proven track record of delivering scalable and secure AI applications. The AI Application Architect will work closely with cross-functional teams, including data science, engineering, and product management, to design and implement AI solutions that drive business value. Key Responsibilities: Architecture and Design: Develop and maintain a deep understanding of the organization's AI strategy and architecture, and design scalable and secure AI applications that meet business requirements. AI/ML Solution Development: Collaborate with data scientists and engineers to develop and deploy AI/ML models and applications, using technologies such as deep learning, natural language processing, and computer vision. Cloud Computing: Design and implement AI applications on cloud platforms, such as AWS, Azure, or Google Cloud, and ensure scalability, security, and compliance. Technology Evaluation: Evaluate and recommend new AI/ML technologies, tools, and frameworks, and develop proof-of-concepts to demonstrate their value. Collaboration and Leadership: Work closely with cross-functional teams, including data science, engineering, and product management, to ensure that AI applications meet business requirements and are delivered on time. Security and Compliance: Ensure that AI applications are designed and implemented with security and compliance in mind, and that all relevant regulations and standards are met. Monitoring and Optimization: Monitor AI application performance, identify areas for optimization, and develop strategies to improve efficiency and effectiveness. Requirements: Technical Skills: Programming languages: Python, Java, C++, etc. AI/ML frameworks: TensorFlow, PyTorch, Scikit-learn, etc. Cloud platforms: AWS, Azure, Google Cloud, etc. Containerization: Docker, Kubernetes, etc. Security and compliance: GDPR, HIPAA, etc. Soft Skills: Strong communication and collaboration skills. Ability to work in a fast-paced environment and meet deadlines. Strong problem-solving and analytical skills.
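For illustration of the RabbitMQ/Celery task-queue work the Python lead role above describes, a minimal sketch; the broker URL, task name, and retry settings are placeholders, not part of the posting.

# Minimal Celery worker backed by RabbitMQ (illustrative; broker URL is a placeholder)
from celery import Celery

app = Celery("tooling", broker="amqp://guest:guest@localhost:5672//")

@app.task(bind=True, max_retries=3)
def process_document(self, doc_id: str) -> dict:
    """Asynchronously process a document; retried on transient failures."""
    try:
        # ...fetch the document, call a backend service, persist the result
        return {"doc_id": doc_id, "status": "done"}
    except Exception as exc:
        raise self.retry(exc=exc, countdown=30)

# A REST endpoint would enqueue work with: process_document.delay("abc-123")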

Posted 1 day ago

Apply

2.0 - 3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Join our Team About this opportunity: Join Ericsson as an Oracle Database Administrator and play a key role in managing and optimizing our critical database infrastructure. As an Oracle DBA, you will be responsible for installing, configuring, upgrading, and maintaining Oracle databases, ensuring high availability, performance, and security. You’ll work closely with cross-functional teams to support business-critical applications, troubleshoot issues, and implement database upgrades and patches. This role offers a dynamic and collaborative environment where you can leverage your expertise to drive automation, improve efficiency, and contribute to innovative database solutions. What you will do: Oracle, PostgreSQL, MySQL, and/or MariaDB database administration in production environments. Experience with Container Databases (CDBs) and Pluggable Databases (PDBs) for better resource utilization and simplified management. High availability configuration using Oracle Data Guard, PostgreSQL or MySQL replication, and/or MariaDB Galera clusters. Oracle Enterprise Manager administration, including alarm integration. Familiarity with Linux tooling such as iotop, vmstat, nmap, OpenSSL, grep, ping, find, df, ssh, and dnf. Familiarity with Oracle SQL Developer, Oracle Data Modeler, pgAdmin, Toad, phpMyAdmin, and MySQL Workbench is a plus. Familiarity with NoSQL databases such as MongoDB is a plus. Knowledge of middleware like GoldenGate, both Oracle-to-Oracle and Oracle-to-Big Data. Conduct detailed performance analysis and fine-tuning of SQL queries and stored procedures. Analyze AWR and ADDM reports to identify and resolve performance bottlenecks. Implement and manage backup strategies using RMAN and other industry-standard tools. Perform pre-patch validation using opatch and datapatch. Test patches in a non-production environment to identify potential issues before applying them to production. Apply Oracle quarterly patches and security updates. The skills you bring: Bachelor of Engineering or equivalent experience with at least 2 to 3 years in the field of IT. Must have experience in handling operations in a customer service delivery organization. Thorough understanding of the basic framework of Telecom/IT processes. Willingness to work in a 24x7 operational environment with rotating shifts, including weekends and holidays, to support critical infrastructure and ensure minimal downtime. Strong understanding of Linux systems and networking fundamentals. Knowledge of cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes) is a plus. Oracle Certified Professional (OCP) is preferred. Why join Ericsson? At Ericsson, you’ll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what’s possible. To build solutions never seen before to some of the world’s toughest problems. You’ll be challenged, but you won’t be alone. You’ll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like. Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do.
We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Primary country and city: India (IN) || Noida Req ID: 770689

Posted 1 day ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About the Role: We’re looking for a DevOps Lead to architect, implement, and manage scalable cloud infrastructure, robust CI/CD pipelines, and secure deployment systems. You’ll lead the DevOps function, working closely with engineering and AI teams to ensure high availability, performance, and reliability across all AI-driven products. Key Responsibilities: Lead and mentor the DevOps team, establishing best practices and operational standards Architect and optimize CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI Design and manage Kubernetes-based container orchestration and Docker deployments Build and maintain cloud infrastructure on AWS, GCP, or Azure with cost and performance optimization Automate infrastructure provisioning and configuration with Terraform and Ansible Set up and manage observability stacks using Prometheus, Grafana, ELK, etc. Ensure uptime, scalability, and performance of critical services and environments Administer web servers (Nginx, Apache) and enforce server and network-level security Implement DevSecOps practices including secrets management, access controls, and compliance checks Lead incident response and root cause analysis for infrastructure-related issues Requirements: 4+ years of DevOps or SRE experience, with at least 1–2 years in a lead or architectural role Deep knowledge of CI/CD pipelines, containerization (Docker), and orchestration (Kubernetes) Strong experience with cloud platforms like AWS, GCP, or Azure Proficient in Linux server administration and infrastructure automation tools (Terraform, Ansible) Solid understanding of Git-based workflows (GitHub, GitLab, Bitbucket) Experience with monitoring, logging, and alerting systems (Prometheus, Grafana, ELK, etc.) Strong problem-solving skills and ability to work cross-functionally in a fast-paced environment Preferred Qualifications: Bachelor’s degree in Computer Science, Information Technology, or equivalent practical experience Certifications like AWS Certified DevOps Engineer, CKA/CKAD, or relevant cloud/k8s credentials Experience in AI/ML infrastructure, GPU cluster setup, or MLOps tooling (optional but a big plus)
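For illustration of the kind of cluster automation a DevOps lead like this might script, a small sketch using the official Kubernetes Python client; the namespace and kubeconfig assumptions are placeholders, not part of the posting.

# Check that all Deployments in a namespace have their desired replicas available
# (illustrative; assumes a local kubeconfig and a "production" namespace)
from kubernetes import client, config

config.load_kube_config()            # or config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

unhealthy = []
for dep in apps.list_namespaced_deployment(namespace="production").items:
    desired = dep.spec.replicas or 0
    available = dep.status.available_replicas or 0
    if available < desired:
        unhealthy.append((dep.metadata.name, available, desired))

for name, available, desired in unhealthy:
    print(f"{name}: {available}/{desired} replicas available")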

Posted 1 day ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Backend (Node.js) Engineer SE II Gurugram, Haryana – Work from office / Full-Time OLX India is a leading classifieds platform, boasting over 100 million app downloads across diverse categories, including cars, bikes, real estate, and electronics. OLX aims to redefine the online classified landscape for its users by extending the lifespan of products through its platform, helping them maximize the value of their belongings and promoting responsible consumption, contributing to a greener, more sustainable future. OLX is now part of the CarTrade Tech group, which is India's largest online auto platform. We are seeking a highly skilled Backend Developer to join our team, contributing to the development and implementation of innovative software solutions. What You will do: ● You will write high-quality code using object-oriented principles with 100% unit and integration tests. ● Design, develop, and maintain backend services and APIs using Node.js. ● You will translate high-level business problems into scalable design and code. ● Create libraries for wider usage, define APIs, and own end-to-end services. ● Mentor junior engineers while optimizing application architectures for Docker/Kubernetes/AWS and using microservices best practices (logging, metrics, and database). ● You will ensure platform and application readiness concerning capacity, performance, monitoring and stability. ● You will influence product requirements and operational plans. ● Instill best practices for development and promote their adoption, while working with the product manager to estimate and plan projects in the Agile development framework. ● You will report to the Engineering Manager. Who we are looking for: ● Expertise in object-oriented design principles using any modern language, preferably Node.js. ● Strong proficiency in JavaScript/TypeScript and Node.js. ● Experience with frameworks like Express.js, Nest.js, or Koa.js. ● Hands-on experience with RESTful APIs, microservices, and event-driven architecture. ● Familiarity with Docker, Kubernetes, or cloud platforms (AWS/GCP/Azure). ● Bachelor of Technology / Engineering in Computer Science or a related technical discipline with 4 plus years of relevant experience. ● Experience building large-scale web applications in a service-oriented architecture. ● Knowledge of design patterns, and an ability to design intuitive module- and class-level interfaces, with experience in data model design in SQL/NoSQL. ● Knowledge of at least one relational database such as MySQL, PostgreSQL, or Oracle, and one NoSQL database such as DynamoDB, MongoDB, Cassandra, or Aerospike. What We'll give you: ● An opportunity to shape a largely unorganised industry and help millions of car buyers and sellers transact with trust and efficiency. ● Passionate team and leadership colleagues who will share the dream and drive to deliver the most trusted, convenient and innovative car buying and selling experiences. ● Opportunities to speed up your learning and development across role-relevant areas. ● At OLX, we are committed to creating a diverse, inclusive, and authentic workplace. ● We strongly encourage people of all races, ethnicities, disabilities, ages, gender identities or expressions, sexual orientations, religions, backgrounds, and experiences to apply. ● We embrace diversity and welcome applicants from all backgrounds. If you are as excited as us about this position and our company, we hope you join us! "Our Success is fueled by diverse perspectives and talents"

Posted 1 day ago

Apply

0 years

0 Lacs

Tamil Nadu, India

On-site

Collaborate with data engineers, data scientists, and stakeholders to understand data requirements, problem statements, and system integrations Utilize, apply & enhance GenAI models using state-of-the-art techniques like transformers, GANs, VAEs, and LLMs Implement and optimize GenAI models for performance, scalability, and efficiency Integrate GenAI models, including LLMs, into production pipelines, applications, and existing analytical solutions Develop user-facing interfaces and APIs to interact with GenAI models, including LLMs Utilize prompt engineering techniques to enhance model performance, including for LLMs. Technical skills requirements: The candidate must demonstrate proficiency in the following. Collaborate with data engineers, data scientists, and stakeholders to understand data requirements, problem statements, system integrations, and RAG application functionalities Use, apply, and enhance GenAI models using state-of-the-art techniques like transformers, GANs, VAEs, LLMs (including experience with various LLM architectures and capabilities), and vector representations for efficient data processing Implement and optimize GenAI models for performance, scalability, and efficiency, considering factors like chunking strategies for large datasets and efficient memory management Integrate GenAI models, including LLMs, into production pipelines, applications, existing analytical solutions, and RAG workflows, ensuring seamless data flow and information exchange Develop user-facing interfaces and APIs (RESTful or GraphQL) to interact with GenAI models and RAG applications, providing a user-friendly experience Utilize LangChain and similar tools (e.g., PromptChain) to facilitate efficient data retrieval, processing, and prompt engineering for LLM fine-tuning within RAG applications Apply software engineering principles to develop robust, scalable, maintainable, and production-ready GenAI applications Build and deploy GenAI applications on cloud platforms (AWS, Azure, or GCP), leveraging containerization technologies (Docker, Kubernetes) for efficient resource management Integrate GenAI applications with other applications, tools, and analytical solutions (including dashboards and reporting tools) to create a cohesive user experience and workflow within the RAG ecosystem Continuously evaluate and improve GenAI models and applications based on data, feedback, user needs, and RAG application performance metrics Stay up-to-date with the latest advancements in GenAI research, development, software engineering practices, integration tools, LLM architectures, and RAG functionalities Document code, models, processes, and RAG application design for future reference and knowledge sharing
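For illustration of the retrieval step of a RAG pipeline like the one this listing describes, a minimal sketch using FAISS; the embed() function, documents, and vector dimension are placeholders, not part of the posting.

# Minimal RAG retrieval step (illustrative; embed() stands in for a real embedding model)
import numpy as np
import faiss

def embed(texts):
    # Placeholder: in practice, call a sentence-transformer or a hosted embedding API
    rng = np.random.default_rng(0)
    return rng.random((len(texts), 384), dtype=np.float32)

documents = ["Policy on refunds...", "Shipping timelines...", "Warranty terms..."]
doc_vectors = embed(documents)

index = faiss.IndexFlatL2(doc_vectors.shape[1])   # exact L2 search over embeddings
index.add(doc_vectors)

query = "How long does shipping take?"
_, ids = index.search(embed([query]), 2)          # top-2 most similar chunks
context = "\n".join(documents[i] for i in ids[0])

prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# The prompt would then be sent to an LLM; the model call is omitted here.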

Posted 1 day ago

Apply

10.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

We are looking for an L3 Support Engineer with deep expertise in Elasticsearch and the ELK stack to provide operational support and drive the stability, performance, and scalability of large-scale cluster deployments. The ideal candidate will play a vital role in diagnosing complex system issues, optimizing performance, and ensuring the reliability of Elasticsearch clusters in production environments. Key Roles and Responsibilities: Provide advanced (L3) technical support for Elasticsearch clusters and associated components (Kibana, Logstash, Beats, Metricbeat, etc.). Monitor, troubleshoot, and resolve critical production issues related to cluster performance, indexing latency, node failures, and data ingestion bottlenecks. Design and maintain ingestion pipelines using Logstash, Beats, or custom shippers to ensure real-time data processing and reliability. Optimize Elasticsearch performance by tuning shards, mappings, queries, analyzers, and index templates. Manage and implement Index Lifecycle Management (ILM), data retention policies, and archival strategies to maintain cluster health and storage efficiency. Perform root cause analysis and work with cross-functional teams to resolve systemic issues. Maintain operational documentation and provide knowledge transfer to internal teams. Participate in on-call/on-site support and contribute to issue resolution efforts. Adhere to high-quality work standards. Responsible for maintaining the confidentiality, integrity, and availability of Vehere’s information assets, including business-critical information. Skills and Experience: Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field, or equivalent industry experience. 5–10+ years of hands-on experience managing Elasticsearch clusters in production (version 7.x/8.x preferred). Expert understanding of Elasticsearch internals, including Lucene, inverted indexes, query planning, and data structures. Strong command over query DSL, aggregations, filters, and performance tuning techniques. Experience designing scalable, resilient architecture for ELK/OpenSearch deployments across multi-node clusters. Proficiency in Logstash pipelines, custom grok patterns, and Beats agent configurations. Familiarity with Kibana or equivalent visualization tools for dashboarding and troubleshooting. In-depth knowledge of JVM tuning, garbage collection (GC) strategies, and heap memory optimization. Strong scripting skills in Python, Shell, or Bash for automation, monitoring, and custom ingestion workflows. Exposure to DevOps practices, CI/CD pipelines, containerization (Docker), orchestration tools (Kubernetes), or similar large-scale data sets is a plus.
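For illustration of the query-DSL and aggregation work this role references, a small sketch using the official Elasticsearch Python client; the host, index pattern, and field names are assumptions, not part of the posting.

# Query a logs index and aggregate error counts per service
# (illustrative; host, index, and field names are placeholders)
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="logs-app-*",
    query={"bool": {"filter": [
        {"term": {"level": "ERROR"}},
        {"range": {"@timestamp": {"gte": "now-15m"}}},
    ]}},
    aggs={"by_service": {"terms": {"field": "service.keyword", "size": 10}}},
    size=0,  # aggregation only; skip returning individual hits
)

for bucket in resp["aggregations"]["by_service"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])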

Posted 1 day ago

Apply

6.0 years

9 - 16 Lacs

Jaipur, Rajasthan, India

On-site

About The Opportunity A high-growth leader in the enterprise SaaS and digital transformation sector is seeking a Senior Software Engineer to join our engineering powerhouse. We build resilient microservices-based platforms and modern front-end solutions that power mission-critical applications for clients worldwide. Leveraging cutting-edge Nest.js and Next.js frameworks, you will architect, implement, and optimize scalable systems to drive innovation and performance. This is a full-time, on-site role based in India. Role & Responsibilities Design, develop, and maintain microservices using Nest.js, ensuring scalability, reliability, and security. Implement responsive and SEO-friendly front-end applications with Next.js and React, optimizing for performance and UX. Collaborate with cross-functional teams to define API contracts (REST/GraphQL), and ensure seamless integration of services. Author clean, maintainable code in TypeScript, and conduct thorough code reviews to enforce best practices and coding standards. Participate in full software development lifecycle: planning, estimating, testing, deployment, and continuous improvement in Agile environments. Optimize CI/CD pipelines using tools like GitHub Actions, Jenkins, Docker, and Kubernetes to automate builds, tests, and deployments. Skills & Qualifications Must-Have 6+ years of software engineering experience, with a focus on full-stack development. Proven expertise in Nest.js for building scalable microservices and APIs. Strong experience with Next.js and React for server-side rendered and static web applications. Proficient in TypeScript, Node.js, and modern JavaScript (ES6+). Solid understanding of RESTful APIs, GraphQL, and microservices architecture patterns. Hands-on experience with containerization (Docker) and orchestration (Kubernetes). Preferred Experience with cloud platforms (AWS, Azure, or Google Cloud) and serverless architectures. Familiarity with message brokers (RabbitMQ, Kafka) and event-driven design. Knowledge of CI/CD tools (Jenkins, GitHub Actions) and infrastructure as code (Terraform). Exposure to testing frameworks (Jest, Mocha) and TDD/BDD methodologies. Understanding of performance tuning, caching strategies, and observability (Prometheus, Grafana). Benefits & Culture Highlights Collaborative, open culture with mentorship and continuous learning opportunities. Competitive compensation package, health benefits, and performance-based bonuses. Vibrant on-site work environment encouraging innovation and work-life balance. Skills: jenkins,docker,ci/cd,restful apis,typescript,graphql,github actions,microservices architecture,software,javascript (es6+),nest.js,code,react,nestjs,next.js,microservices,kubernetes,node.js

Posted 1 day ago

Apply

5.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Experience: 5 to 8 years Notice period: Immediate joiners only JD: Extensive hands-on experience with Red Hat Linux. Proficiency in managing Linux services (DNS, NTP, SSH, Apache, MySQL/PostgreSQL, etc.). Strong experience with cloud environments (Oracle, GCP). Familiarity with CI/CD pipelines and configuration management tools like Ansible, Puppet, or Chef. Strong understanding of networking protocols and troubleshooting (TCP/IP, firewalls, VPNs, etc.). Proficiency in handling high-availability systems, load balancing, and failover. Certifications like Red Hat Certified Engineer (RHCE), Oracle Linux Certified Professional, or similar. Preferred: Knowledge of cloud platforms (Google, Oracle Cloud) with Linux-based infrastructure. Familiarity with DevOps tools and practices. Experience with Docker, Kubernetes, and container orchestration. Familiarity with SAN, NAS, and other enterprise storage solutions. Leadership skills and the ability to manage projects. Knowledge of ITIL processes and ticketing systems. Security & compliance, troubleshooting.

Posted 1 day ago

Apply

0 years

0 Lacs

Telangana, India

On-site

Collaborate with data engineers, data scientists, and stakeholders to understand data requirements, problem statements, and system integrations Utilize, apply & enhance GenAI models using state-of-the-art techniques like transformers, GANs, VAEs, and LLMs Implement and optimize GenAI models for performance, scalability, and efficiency Integrate GenAI models, including LLMs, into production pipelines, applications, and existing analytical solutions Develop user-facing interfaces and APIs to interact with GenAI models, including LLMs Utilize prompt engineering techniques to enhance model performance, including for LLMs. Technical skills requirements: The candidate must demonstrate proficiency in the following. Collaborate with data engineers, data scientists, and stakeholders to understand data requirements, problem statements, system integrations, and RAG application functionalities Use, apply, and enhance GenAI models using state-of-the-art techniques like transformers, GANs, VAEs, LLMs (including experience with various LLM architectures and capabilities), and vector representations for efficient data processing Implement and optimize GenAI models for performance, scalability, and efficiency, considering factors like chunking strategies for large datasets and efficient memory management Integrate GenAI models, including LLMs, into production pipelines, applications, existing analytical solutions, and RAG workflows, ensuring seamless data flow and information exchange Develop user-facing interfaces and APIs (RESTful or GraphQL) to interact with GenAI models and RAG applications, providing a user-friendly experience Utilize LangChain and similar tools (e.g., PromptChain) to facilitate efficient data retrieval, processing, and prompt engineering for LLM fine-tuning within RAG applications Apply software engineering principles to develop robust, scalable, maintainable, and production-ready GenAI applications Build and deploy GenAI applications on cloud platforms (AWS, Azure, or GCP), leveraging containerization technologies (Docker, Kubernetes) for efficient resource management Integrate GenAI applications with other applications, tools, and analytical solutions (including dashboards and reporting tools) to create a cohesive user experience and workflow within the RAG ecosystem Continuously evaluate and improve GenAI models and applications based on data, feedback, user needs, and RAG application performance metrics Stay up-to-date with the latest advancements in GenAI research, development, software engineering practices, integration tools, LLM architectures, and RAG functionalities Document code, models, processes, and RAG application design for future reference and knowledge sharing

Posted 1 day ago

Apply

12.0 - 18.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Job Description We are seeking a seasoned Program Manager to lead strategic initiatives across enterprise-grade Java/J2EE applications deployed on modern hybrid cloud platforms. The ideal candidate will have a strong technical foundation, proven leadership in managing cross-functional teams, and a strong background in Kubernetes, cloud-native architectures, microservice architecture, and Agile delivery models. Roles & Responsibilities: Lead end-to-end program delivery for enterprise applications built on the Java/J2EE stack and microservice architecture. Manage multiple project streams across development, testing, deployment, and support. Collaborate with engineering, DevOps, QA, and business stakeholders to ensure alignment and timely delivery. Drive cloud migration, new development, and modernization efforts using platforms like AWS, Azure, GCP, or private cloud. Oversee container orchestration and microservices deployment using Kubernetes. Establish and monitor KPIs, SLAs, and program health metrics. Manage risks, dependencies, and change control processes. Ensure compliance with security, governance, and regulatory standards. Facilitate Agile ceremonies and promote continuous improvement. Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. 12 to 18 years of experience in IT, with at least 5 years in program/project management. Strong background in Java/J2EE enterprise application development and microservice architecture. Strong background with any cloud platform (AWS, Azure, GCP, or private cloud). Proficiency in Kubernetes, Docker, and containerized deployments. Familiarity with CI/CD pipelines, DevOps practices, and infrastructure as code. Excellent communication, stakeholder management, and leadership skills. PMP, PMI-ACP, or SAFe certification is a plus. Mandatory Skills: Program Manager, Microservice-Based Architecture, Agile Model

Posted 1 day ago

Apply

10.0 years

0 Lacs

India

Remote

Arbelos Solutions is a world class tech start-up with its presence at various domestic and international locations. Our expertise lies in – • OSS/BSS solution provider • Software & Web Development • Enterprise Solutions • Mobile App Development • Digital Marketing & Transformation • Business support system Job Role: Generative AI Architect Job Location: Remote Experience: 10+ years (including 3 years in GenAI/LLMs) About the Role: We are seeking a highly skilled Generative AI Architect to lead the design, development, and deployment of cutting-edge GenAI solutions across enterprise-grade applications. This role requires deep expertise in LLMs, prompt engineering, and scalable AI system architecture, combined with hands-on experience in MLOps, cloud, and data engineering. Key Responsibilities: ● Design and implement scalable, secure GenAI solutions using large language models (LLMs) such as GPT, Claude, LLaMA, or Mistral. ● Architect Retrieval-Augmented Generation (RAG) pipelines using tools like LangChain, LlamaIndex, Weaviate, FAISS, or ElasticSearch. ● Lead prompt engineering and evaluation frameworks for accuracy, safety, and contextual relevance. ● Collaborate with product, engineering, and data teams to integrate GenAI into existing applications and workflows. ● Build reusable GenAI modules (function calling, summarization, Q&A bots, document chat, etc.). ● Leverage cloud-native platforms (AWS Bedrock, Azure OpenAI, Vertex AI) to deploy and optimize GenAI workloads. ● Ensure robust monitoring, logging, and observability across GenAI deployments (Grafana, OpenTelemetry, Prometheus). ● Apply MLOps practices for CI/CD of AI pipelines, model versioning, validation, and rollback. ● Research and prototype emerging trends in GenAI including multi-agent systems, autonomous agents, and fine-tuning. ● Implement security best practices, data governance, and compliance protocols (PII masking, encryption, audit logs). Required Skills & Experience: ● 8+ years of overall experience in AI/ML, with at least 2–3 years focused on LLMs / GenAI. ● Strong programming skills in Python, with frameworks like Transformers (Hugging Face), LangChain, or OpenAI SDKs. ● Experience with Vector Databases (e.g., Pinecone, Weaviate, FAISS, Qdrant). ● Proficiency in cloud platforms: AWS (SageMaker, Bedrock), Azure (OpenAI), GCP (Vertex AI). ● Experience in designing and deploying RAG pipelines, summarization engines, and chat-based apps. ● Familiarity with function calling, tool usage, agents, and LLM orchestration frameworks (LangGraph, AutoGen, CrewAI). ● Understanding of MLOps tools: MLflow, Airflow, Docker, Kubernetes, FastAPI. ● Exposure to prompt injection mitigation, hallucination control, and LLMOps. ● Ability to evaluate GenAI systems using metrics like BERTScore, BLEU, GPTScore. ● Strong communication and documentation skills; ability to lead architecture discussions and mentor engineering teams. Preferred (Nice to Have): ● Experience with fine-tuning open-source LLMs (LLaMA, Mistral, Falcon) using LoRA or QLoRA. ● Knowledge of multi-modal AI (text-image, voice assistants). ● Familiarity with domain-specific LLMs in Healthcare, BFSI, Legal, or EdTech. 
● Published work, patents, or open-source contributions in GenAI If you are willing to work for a fast-paced tech start up, then kindly share your profiles or references to a.verma@arbelosgroup.com with the below details – Full Name: Total Experience: Generative AI Architect Current CTC: Expected CTC: Highest Education: Notice Period: Current Location: LinkedIn Profile:

Posted 1 day ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Greetings from Peoplefy! We are looking for a Cloud Data Architect – Pune Are you a seasoned data expert with deep experience in architecting scalable, secure, and high-performance cloud-based data solutions? Join us as a Cloud Data Architect, where you’ll work with cross-functional teams to transform complex business needs into robust data architectures. Experience - 10+ years Notice Period - Immediate joiners What you'll do: Design and implement cloud-based data architectures – data lakes, data pipelines, and warehouses Build optimized data models (star/snowflake schema) to power BI and analytics use cases Work with big data tools like Hadoop, Spark, Kafka, and more Manage both relational (MySQL, PostgreSQL) and NoSQL (MongoDB, Cassandra) databases Build secure, compliant data solutions in line with GDPR/CCPA Automate infrastructure using Terraform, CloudFormation, and CI/CD tools Collaborate with analytics teams and integrate ML frameworks and visualization tools (Tableau, Power BI) What you bring: Experience in data architecture and engineering Strong hands-on experience with GCP, AWS, or Azure cloud platforms Proficiency in SQL and Python/Java/Scala Knowledge of containerization (Docker/Kubernetes) and IaC Excellent communication and stakeholder collaboration skills Preferred: Cloud certifications (AWS/GCP/Azure) Experience with large-scale cloud migration projects Interested candidates, please share your resumes at amruta.bu@peoplefy.com
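For illustration of the kind of data-lake pipeline step a cloud data architect like this might review, a minimal PySpark sketch; the bucket paths, column names, and partitioning scheme are placeholders, not part of the posting.

# Read raw events from a data lake, aggregate, and write a curated table
# (illustrative; paths, formats, and column names are placeholders)
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-orders-rollup").getOrCreate()

orders = spark.read.parquet("s3a://raw-zone/orders/")          # raw zone
daily = (orders
         .withColumn("order_date", F.to_date("order_ts"))
         .groupBy("order_date", "region")
         .agg(F.sum("amount").alias("revenue"),
              F.countDistinct("customer_id").alias("customers")))

daily.write.mode("overwrite").partitionBy("order_date") \
     .parquet("s3a://curated-zone/daily_orders/")              # curated zone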

Posted 1 day ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About Client: Our client is an American-based global information technology services company that provides digital engineering and technology services for companies in the financial services, healthcare, communications, media, entertainment, travel, manufacturing, and technology industries worldwide. Job Title: Java Backend Developer Key Skills: Java, Microservices, Spring Boot, AWS Job Locations: Chennai, Bengaluru, Hyderabad Experience: 10+ Years Employment Type: Full Time Notice Period: Immediate joiner – 15 days Key Qualifications: Strong proficiency in the Java programming language and the Spring Boot framework, using IDEs for development, debugging, and testing. Experience working in distributed processing systems like Kafka. Knowledge of microservice architecture and design patterns. Knowledge of RDBMS (Oracle/MSSQL). Hands-on experience with ORMs like Spring JPA and Hibernate. Acquainted with API design. Solid understanding of software development principles, with strong problem-solving and analytical skills. Good to have: Experience with NoSQL databases (MongoDB). Experience with Docker, Kubernetes, AWS, and CI/CD tools like Jenkins.

Posted 1 day ago

Apply

6.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About company: Virtusa Corporation is an American-based global information technology services company that provides digital engineering and technology services for companies in the financial services, healthcare, communications, media, entertainment, travel, manufacturing, and technology industries worldwide. Experience: 6 to 10 years Notice Period: Immediate joiners. Mandatory Skills: Java, Spring Boot, Microservices, AWS Job Description: Strong proficiency in the Java programming language and the Spring Boot framework, using IDEs for development, debugging, and testing. Experience working in distributed processing systems like Kafka. Knowledge of microservice architecture and design patterns. Knowledge of RDBMS (Oracle/MSSQL). Hands-on experience with ORMs like Spring JPA and Hibernate. Acquainted with API design. Solid understanding of software development principles, with strong problem-solving and analytical skills. Good to have: Experience with NoSQL databases (MongoDB). Experience with Docker, Kubernetes, AWS, and CI/CD tools like Jenkins.

Posted 1 day ago

Apply

2.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Job Title: Azure DevOps Engineer Location: Kolkata, India Experience: 2+ Years About the Role: We are looking for a skilled Azure DevOps Engineer to join our dynamic team in Kolkata. You will design and implement continuous integration pipelines, work closely with cloud platforms (primarily Microsoft Azure), and collaborate with development teams to streamline delivery through Agile practices. Key Responsibilities: Design and implement CI/CD pipelines using Azure DevOps. Manage and optimize cloud infrastructure on Microsoft Azure. Enforce Agile methodologies alongside Azure DevOps tools. Automate code quality checks, assembly testing, and performance testing as part of the CI pipeline. Collaborate with senior developers to design and implement scalable development architectures. Required Skills & Experience: Minimum 2 years of experience as a DevOps Engineer. Hands-on experience with Azure DevOps and Azure cloud services. Strong knowledge of CI/CD pipeline design and implementation. Proficient with ASP.NET MVC, ASP.NET Core, or Node.js. Experience with test automation tools like SonarQube and UFT. Familiarity with Azure Kubernetes Service (AKS) is a plus. Experience working within Agile delivery teams. Qualifications: Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent experience). Notice period-ONLY IMMEDIATE JOINERS ARE PREFERRED. What We Offer: Opportunity to work on cutting-edge cloud technologies. Supportive and collaborative team environment. Growth and career advancement opportunities. Interested candidates, please apply with your updated resume to nikita.bhattacharya@ubique-systems.com

Posted 1 day ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Key Responsibilities: Architectural Leadership Design and document scalable, reliable, and maintainable architectures for Gen AI applications. Ensure solutions meet production-grade standards and enterprise requirements. Technical Decision Making Evaluate trade-offs in technology choices, design patterns, and frameworks. Align decisions with Gen AI best practices and software engineering principles. Team Guidance Mentor and guide architects and engineers. Foster a collaborative, innovative, and high-performance development environment. Hands-On Development Actively contribute to prototyping and implementation using C# and Python. Drive research and development of core AI Gateway components. Product Development Mindset Build a responsible and scalable AI Gateway considering: Cost efficiency Security and compliance Upgradeability Ease of use and integration Required Qualifications: Technical Expertise Extensive experience in API-based projects and full lifecycle deployment of Gen AI/LLM applications. Strong hands-on proficiency in C# and practical experience with Python. Cloud & DevOps Expertise in Docker, Kubernetes, and OpenShift for containerization and orchestration. Working knowledge of Azure AI Services (OpenAI, AI Search, Document Intelligence) and AWS services (EKS, SageMaker, Bedrock). Security & Access Management Familiarity with Okta for secure identity and access management. LLM & Gen AI Tools Experience with LangChain, LlamaIndex, and OpenAI SDKs in C#. Monitoring & Troubleshooting Proven ability to monitor, trace, and debug complex distributed AI systems. Personal Attributes: Strong leadership and mentorship capabilities. Excellent communication skills for both technical and non-technical audiences. Problem-solving mindset with attention to detail. Passion for advancing AI technologies in production environments. Preferred Experience: Prior leadership in large-scale, production-grade AI initiatives. Experience in enterprise technology projects involving Gen AI.

Posted 1 day ago

Apply

8.0 years

0 Lacs

Andhra Pradesh, India

On-site

Test automation JD: 8+ years of hands-on experience in Thought Machine Vault, Kubernetes, Terraform, GCP/AWS, PostgreSQL, CI/CD, REST APIs, Docker, and microservices. Architect and manage enterprise-level databases with 24/7 availability. Lead efforts on optimization, backup, and disaster recovery planning. Ensure compliance; implement monitoring and automation. Guide developers on schema design and query optimization. Conduct DB health audits and capacity planning. Collaborate with cross-functional teams to define, design, and ship new features. Work on the entire software development lifecycle, from concept and design to testing and deployment. Implement and maintain AWS cloud-based solutions, ensuring high performance, security, and scalability. Integrate microservices with Kafka for real-time data streaming and event-driven architecture. Troubleshoot and resolve issues in a timely manner, ensuring optimal system performance. Keep up-to-date with industry trends and advancements, incorporating best practices into our development processes. Bachelor's or Master's degree in Computer Science or a related field. Solid understanding of AWS services, including but not limited to EC2, Lambda, S3, and RDS. Experience with Kafka for building event-driven architectures. Strong database skills, including SQL and NoSQL databases. Familiarity with containerization and orchestration tools (Docker, Kubernetes). Excellent problem-solving and troubleshooting skills. Good to have: TM Vault core banking knowledge. Strong communication and collaboration skills.
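For illustration of the Kafka-based event-driven integration this listing describes, a minimal consumer sketch using the confluent-kafka Python client; the broker address, topic, and group id are placeholders, not part of the posting.

# Minimal Kafka consumer for an event-driven service
# (illustrative; broker, topic, and group id are placeholders)
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker:9092",
    "group.id": "ledger-posting-service",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["account-events"])

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print("consumer error:", msg.error())
            continue
        event = json.loads(msg.value())
        # ...apply the event to the downstream datastore or core banking API
        print("processed", event.get("event_id"))
finally:
    consumer.close()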

Posted 1 day ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Who we are Mindtickle is the market-leading revenue productivity platform that combines on-the-job learning and deal execution to get more revenue per rep. Mindtickle is recognized as a market leader by top industry analysts and is ranked by G2 as the #1 sales onboarding and training product. We’re honoured to be recognized as a Leader in the first-ever Forrester Wave™: Revenue Enablement Platforms, Q3 2024! Job Snapshot As a TPO-II/ Engineering Manager for the DevOps and SRE functions at Mindtickle, you will lead a mission-critical team that sits at the heart of our engineering organization. From building pipelines to production observability, cost governance to incident response, the DevOps/SRE team plays a central role in enabling engineering speed, reliability, and scale. This role gives you the opportunity to shape the future of how Mindtickle builds and operates software by evolving our tooling, maturing our on-call and release processes, strengthening platform security, and driving a culture of engineering excellence and operational ownership. Key Problem Areas Evolving DevOps from a Support Function to a Strategic Enabler: The DevOps team is often the first responder for many horizontal issues, but we want more. You’ll drive long-term investments in automation, self-service, and reliability to ensure we’re not just solving problems but preventing them. Standardizing Logging & Monitoring at Scale: We’ve made strong progress on adopting modern observability tools like Loki. You’ll drive the consolidation and rollout of a centralized logging, alerting, and monitoring stack to support spiky traffic, tenant isolation, and actionable alerts. Creating a Culture of Cost Ownership: You’ll own cost governance across engineering, setting up the right visibility, processes, and rituals to ensure teams understand, predict, and optimize their infra usage, with DevOps as an active partner rather than a passive reporter. Reducing Incident Fatigue & Improving Recovery: You'll help streamline our on-call and incident response workflows, reduce noise, and drive proactive fixes—so we can prevent outages before they hit users and respond with speed and clarity when they do. Bringing Clarity to Ownership & Prioritization: You’ll build systems that separate ad hoc fire-fighting from strategic project execution-ensuring visibility, accountability, and bandwidth control across the team’s responsibilities. Building Stronger Team Foundations: You'll foster a transparent, collaborative, and high-trust team environment, where operational knowledge is distributed, wins are celebrated, and DevOps becomes a sought-after team for growth and impact. What’s in it for you? Lead the planning, prioritization, and execution of projects and operational tasks within the DevOps team. Drive the consolidation and standardization of our observability, logging, monitoring, and release infrastructure. Establish clear operational boundaries and runbooks between DevOps and other engineering teams. Partner with stakeholders to drive cost governance, forecasting, and proactive optimization initiatives. Improve the team’s execution health through regular grooming, sprint rituals, and retrospective learnings. Guide incident response, RCA follow-through, and prevention planning in collaboration with the broader org. Develop and mentor team members, ensuring distributed ownership, cross-skill coverage, and technical depth. Drive security and access control hygiene in partnership with the platform, compliance, and security teams. 
We’d love to hear from you if you bring: 8+ years of experience in DevOps/SRE/Infrastructure engineering roles, with at least 2 years in a leadership or technical ownership capacity. Strong understanding of CI/CD pipelines, infra-as-code, Kubernetes, Helm, monitoring systems (Prometheus, Loki, Grafana), and logging infrastructure. Proven experience driving platform/tooling decisions and leading cross-functional prioritization. Deep appreciation for clean separation of responsibilities, scalable ops, and enabling developers through great tooling. Experience setting up or maturing on-call, release, or incident management processes. Demonstrated ability to lead a team with empathy, foster psychological safety, and bring visibility to wins and work done. Familiarity with cost tracking across cloud (AWS), Snowflake, or observability tooling rollouts. Why this role matters The DevOps team is the connective tissue across Mindtickle’s engineering org. Every deployment, every rollback, every alert, and every cost dashboard runs through this team. Your leadership will help evolve DevOps from a support function into a multiplier, boosting engineering productivity, enabling scale, and building a stronger, more resilient tech backbone for the company. Our culture & accolades As an organization, it’s our priority to create a highly engaging and rewarding workplace. We offer tons of awesome perks and many opportunities for growth. Our culture reflects our employees' globally diverse backgrounds along with our commitment to our customers and each other, and a passion for excellence. We live up to our values, DAB: Delight your customers, Act as a Founder, and Better Together. Mindtickle is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, colour, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law. Your Right to Work - In compliance with applicable laws, all persons hired will be required to verify identity and eligibility to work in the respective work locations and to complete the required employment eligibility verification document form upon hire.

Posted 1 day ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Position Senior Engineer/Technical Lead (DevOps Engineer - Azure) Job Description Key Responsibilities: Azure Cloud Management: Design, deploy, and manage Azure cloud environments. Ensure optimal performance, scalability, and security of cloud resources using services like Azure Virtual Machines, Azure Kubernetes Service (AKS), Azure App Services, Azure Functions, Azure Storage, and Azure SQL Database. Automation & Configuration Management: Use Ansible for configuration management and automation of infrastructure tasks. Implement Infrastructure as Code (IaC) using Azure Resource Manager (ARM) templates or Terraform. Containerization: Implement and manage Docker containers. Develop and maintain Dockerfiles and container orchestration strategies with Azure Kubernetes Service (AKS) or Azure Container Instances. Server Administration: Administer and manage Linux servers. Perform routine maintenance, updates, and troubleshooting. Scripting: Develop and maintain Shell scripts to automate routine tasks and processes. Helm Charts: Create and manage Helm charts for deploying and managing applications on Kubernetes clusters. Monitoring & Alerting: Implement and configure Prometheus and Grafana for monitoring and visualization of metrics. Use Azure Monitor and Azure Application Insights for comprehensive monitoring, logging, and diagnostics. Networking: Configure and manage Azure networking components such as Virtual Networks, Network Security Groups (NSGs), Azure Load Balancer, and Azure Application Gateway. Security & Compliance: Implement and manage Azure Security Center and Azure Policy to ensure compliance and security best practices. Required Skills and Qualifications: Experience: 5+ years of experience in cloud operations, with a focus on Azure. Azure Expertise: In-depth knowledge of Azure services, including Azure Virtual Machines, Azure Kubernetes Service, Azure App Services, Azure Functions, Azure Storage, Azure SQL Database, Azure Monitor, Azure Application Insights, and Azure Security Center. Automation Tools: Proficiency in Ansible for configuration management and automation. Experience with Infrastructure as Code (IaC) tools like ARM templates or Terraform. Containerization: Hands-on experience with Docker for containerization and container management. Linux Administration: Solid experience in Linux server administration, including installation, configuration, and troubleshooting. Scripting: Strong Shell scripting skills for automation and task management. Helm Charts: Experience with Helm charts for Kubernetes deployments. Monitoring Tools: Familiarity with Prometheus and Grafana for metrics collection and visualization. Networking: Experience with Azure networking components and configurations. Problem-Solving: Strong analytical and problem-solving skills, with the ability to troubleshoot complex issues. Communication: Excellent communication skills, both written and verbal, with the ability to work effectively in a team environment. Preferred Qualifications: Certifications: Azure certifications (e.g., Azure Administrator Associate, Azure Solutions Architect) are a plus. Additional Tools: Experience with other cloud platforms (AWS, GCP) or tools (Kubernetes, Terraform) is beneficial. Location: IN-GJ-Ahmedabad, India-Ognaj (eInfochips) Time Type: Full time Job Category: Engineering Services
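For illustration of the Prometheus monitoring work this listing references, a tiny custom exporter sketch using the prometheus_client library; the metric name, port, and data source are placeholders, not part of the posting.

# Tiny custom Prometheus exporter (illustrative; metric name and port are placeholders)
import random
import time

from prometheus_client import Gauge, start_http_server

queue_depth = Gauge("app_queue_depth", "Number of jobs waiting in the work queue")

if __name__ == "__main__":
    start_http_server(9100)          # Prometheus scrapes http://host:9100/metrics
    while True:
        # In practice this would query the real queue; a random value stands in here
        queue_depth.set(random.randint(0, 50))
        time.sleep(15)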

Posted 1 day ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Senior Backend Developer 3 - 34365
Location: Chennai (On-site)
Type: Full-Time
Budget: Open
Notice Period: Immediate Joiners Only
Assessment: Full Stack Back-End Java (HackerRank or equivalent)
Position Summary: An experienced Back-End Software Development Engineer is required to build and maintain robust, scalable server-side applications. The role focuses on developing APIs, microservices, and database integrations while working in an agile, collaborative environment. The ideal candidate will have deep expertise in Java, modern frameworks, and cloud-native deployment practices; AI/ML experience is an added advantage.
Key Responsibilities:
Engage directly with stakeholders to understand use cases and translate them into technical solutions.
Design, develop, and deploy new server-side functionality using Java, Spring Boot, and related tools.
Implement RESTful APIs and microservices for scalable enterprise applications.
Collaborate with front-end developers to ensure seamless integration of UI components.
Deploy and manage applications on cloud or on-prem infrastructure, ensuring high availability and performance.
Develop and maintain databases using Oracle, MySQL, MongoDB, or similar technologies.
Lead the implementation of modern DevOps practices, including CI/CD, monitoring, and automated testing.
Optimize backend architecture to improve reliability, security, and performance.
Incorporate security best practices such as data encryption and anonymization.
Participate in agile ceremonies, code reviews, paired programming, and demos.
Required Skills & Experience:
6+ years of hands-on experience in backend and full-stack development.
Proficiency in Java, JavaScript, Spring Boot, React/Angular, and SQL/Postgres.
Experience with test-driven development (TDD) and agile methodologies.
Solid understanding of CI/CD pipelines, especially using tools like Jenkins.
2+ years of working with AI/ML, including building ML models using open-source frameworks and using tools such as Python, Hadoop, Kafka, BigQuery, and Kubernetes.
Experience in building AI/ML platforms and deploying ML solutions at scale.
Preferred Experience:
Exposure to the automotive industry or similar domains.
Experience with onshore-offshore delivery models.
Knowledge of test automation frameworks and practices.
Educational Qualifications:
Bachelor’s Degree in Computer Science, Engineering, or a related field (Required).
Master’s Degree (Preferred).
Additional Notes:
The role includes designing and developing interfaces with various web and client-server applications.
Strong focus on collaborative coding practices such as paired programming and clean code principles.
Opportunity to lead technical design discussions and mentor junior developers.
Skills: RESTful APIs, Oracle, microservices, Spring Boot, AI/ML, agile methodologies, Angular, Kubernetes, JavaScript, Python, React, Java, SQL, test-driven development, Postgres, MySQL, BigQuery, Hadoop, CI/CD, Kafka, MongoDB
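To give a flavour of the AI/ML requirement above (building ML models with open-source frameworks), here is a minimal, hypothetical Python sketch using scikit-learn. The dataset and model choice are illustrative assumptions only and are not specified in the listing.

```python
# Hypothetical sketch: train and evaluate a small classifier with scikit-learn.
# The built-in Iris dataset and logistic regression are illustrative choices only.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Keeping preprocessing and the model in one pipeline simplifies serialization
# and later deployment behind a REST API.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

print(f"Test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```

In a role like this, a trained pipeline of this shape would typically be versioned and served behind a Spring Boot or similar backend service rather than run as a standalone script.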

Posted 1 day ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Description: As an L3 AWS Support Engineer, you will be responsible for providing advanced technical support for complex AWS-based solutions. You will troubleshoot and resolve critical issues, architect solutions, and provide technical leadership to the support team.
Key Responsibilities:
Architectural Oversight: Design, implement, and optimize cloud architectures for performance, security, and scalability. Conduct Well-Architected Framework reviews.
Complex Troubleshooting: Resolve critical issues involving hybrid environments, multi-region setups, and service interdependencies. Debug Lambda functions, API Gateway configurations, and other advanced AWS services.
Security: Implement advanced security measures such as GuardDuty, AWS WAF, and Security Hub. Conduct regular security audits and compliance checks (e.g., SOC2, GDPR).
Automation & DevOps: Develop CI/CD pipelines using CodePipeline, Jenkins, or GitLab. Automate infrastructure scaling, updates, and monitoring workflows. Automate the provisioning of EKS clusters and associated AWS resources using Terraform or CloudFormation. Develop and maintain Helm charts for consistent application deployments. Implement GitOps workflows.
Disaster Recovery & High Availability: Design and test failover strategies and disaster recovery mechanisms for critical applications.
Cluster Management and Operations: Design, deploy, and manage scalable and highly available EKS clusters. Manage Kubernetes objects such as Pods, Deployments, StatefulSets, ConfigMaps, and Secrets. Implement and manage Kubernetes resource scheduling, scaling, and lifecycle management.
Team Leadership: Provide technical guidance to Level 1 and 2 engineers. Create knowledge-sharing sessions and maintain best-practices documentation.
Cost Management: Implement resource tagging strategies and cost management tools to reduce operational expenses.
Required Skills and Qualifications:
Technical Skills: Deep understanding of AWS core services and advanced features. Strong expertise in AWS automation, scripting (Bash, Python, PowerShell), and the AWS CLI. Experience with AWS CloudFormation and Terraform. Knowledge of AWS security best practices, identity and access management, and networking.
Capacity Planning: Analyze future resource needs and plan capacity accordingly.
Performance Optimization: Identify and resolve performance bottlenecks.
Migration and Modernization: Lead complex migration and modernization projects.
Soft Skills: Excellent problem-solving and analytical skills. Strong communication and interpersonal skills. Ability to work independently and as part of a team. Customer-focused approach.
Certifications (Preferred): AWS Certified Solutions Architect - Professional, AWS Certified DevOps Engineer - Professional, AWS Certified Security - Specialty.
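As a concrete example of the tagging-based cost management described above, here is a small, hypothetical Python/boto3 sketch that flags EC2 instances missing required cost-allocation tags. The tag keys, region, and reporting format are assumptions for illustration.

```python
# Hypothetical sketch: flag EC2 instances missing required cost-allocation tags.
# Assumes boto3 is installed and credentials allow ec2:DescribeInstances.
import boto3

REQUIRED_TAGS = {"team", "cost-center", "environment"}  # illustrative tag keys


def untagged_instances(region: str = "us-east-1") -> list[tuple[str, set[str]]]:
    """Return (instance_id, missing_tag_keys) for non-compliant instances."""
    ec2 = boto3.client("ec2", region_name=region)
    offenders = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                missing = REQUIRED_TAGS - tags
                if missing:
                    offenders.append((instance["InstanceId"], missing))
    return offenders


if __name__ == "__main__":
    for instance_id, missing in untagged_instances():
        print(f"{instance_id} is missing tags: {', '.join(sorted(missing))}")
```

In practice this kind of check is often enforced preventively with AWS Config rules or tag policies; a script like this is just one lightweight way to audit existing resources.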

Posted 1 day ago

Apply

7.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Position: DevOps with GitHub Actions
Job Description
DevOps & Agile Practices: Apply DevOps principles and Agile practices, including Infrastructure as Code (IaC) and GitOps, to streamline and enhance development workflows.
Infrastructure Management: Oversee the management of Linux-based infrastructure and understand networking concepts, including microservices communication and service mesh implementations.
Containerization & Orchestration: Leverage Docker and Kubernetes for containerization and orchestration, with experience in service discovery, auto-scaling, and network policies.
Automation & Scripting: Automate infrastructure management using advanced scripting and IaC tools such as Terraform, Ansible, Helm Charts, and Python.
AWS and Azure Services Expertise: Utilize a broad range of AWS and Azure services, including IAM, EC2, S3, Glacier, VPC, Route53, EBS, EKS, ECS, RDS, Azure Virtual Machines, Azure Blob Storage, Azure Kubernetes Service (AKS), and Azure SQL Database, with a focus on integrating new cloud innovations.
Incident Management: Manage incidents related to GitLab pipelines and deployments, perform root cause analysis, and resolve issues to ensure high availability and reliability.
Development Processes: Define and optimize development, test, release, update, and support processes for GitLab CI/CD operations, incorporating continuous improvement practices.
Architecture & Development Participation: Contribute to architecture design and software development activities, ensuring alignment with industry best practices and GitLab capabilities.
Strategic Initiatives: Collaborate with the leadership team on process improvements, operational efficiency, and strategic technology initiatives related to GitLab and cloud services.
Required Skills & Qualifications:
Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
Experience: 7-9+ years of hands-on experience with GitLab CI/CD, including implementing, configuring, and maintaining pipelines, along with substantial experience in AWS and Azure cloud services.
Location: IN-GJ-Ahmedabad, India-Ognaj (eInfochips)
Time Type: Full time
Job Category: Engineering Services
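As a small illustration of the pipeline incident-management work described above, here is a hedged Python sketch using the python-gitlab library to list a project's most recent failed pipelines as starting points for root cause analysis. The server URL, token handling, and project ID are assumptions, not details from the listing.

```python
# Hypothetical sketch: list recent failed GitLab CI/CD pipelines for a project.
# Assumes the python-gitlab package is installed and a personal access token
# with read_api scope is available in the GITLAB_TOKEN environment variable.
import os

import gitlab


def failed_pipelines(project_id: int, limit: int = 10) -> None:
    gl = gitlab.Gitlab("https://gitlab.example.com",  # placeholder server URL
                       private_token=os.environ["GITLAB_TOKEN"])
    project = gl.projects.get(project_id)

    # Most recent failed pipelines first; each is a candidate for root cause analysis.
    for pipeline in project.pipelines.list(status="failed", order_by="id", sort="desc",
                                           per_page=limit, get_all=False):
        print(f"Pipeline {pipeline.id} on {pipeline.ref} "
              f"failed at {pipeline.created_at}: {pipeline.web_url}")


if __name__ == "__main__":
    failed_pipelines(project_id=123)  # illustrative project ID
```

A report like this can feed an incident dashboard or alerting workflow; the equivalent for GitHub Actions would query the workflow runs API instead.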

Posted 1 day ago

Apply