
15631 Kubernetes Jobs - Page 27

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

8.0 years

11 - 12 Lacs

India

Remote

Source: Glassdoor

Job Title: .NET Fullstack Developer (.NET + React) – Remote Experience Required: 8+ Years Location: Remote Job Description: We are looking for a highly experienced and self-driven .NET Fullstack Developer with expertise in .NET technologies and ReactJS. The ideal candidate will have a strong background in full-stack development, microservices architecture, and cloud environments, with a passion for delivering high-quality, scalable applications. Key Responsibilities: Design, develop, and maintain full-stack .NET applications in a fast-paced, agile environment. Write clean, efficient, and well-tested code using C#, ASP.NET, Web API, and ReactJS. Work with SQL Server (2012 and above), Windows Services, and web services. Integrate and consume RESTful APIs and web services (SOAP, WSDL, UDDI, BPEL). Implement and manage Docker containers and Kubernetes clusters. Work with cloud platforms (Azure/AWS/GCP) for deployment and scalability. Collaborate with QA to implement TDD and BDD practices using JUnit, Selenium, Cucumber, and Gherkin. Participate in daily standups, sprint planning, and code reviews. Must-Have Skills: 8+ years of hands-on experience with the .NET stack (C#, ASP.NET, Web API, etc.). 3+ years of experience in JavaScript frameworks/libraries (ReactJS, Angular, Bootstrap, jQuery). Proficiency in SQL Server, including stored procedures and performance optimization. Experience with Docker & Kubernetes in production environments. Strong understanding of API integration, web services, and SOA standards. Experience with unit testing and test automation frameworks (TDD/BDD). Good to Have: Experience with Azure cloud services (CosmosDB, AKS, Key Vault, etc.). Familiarity with advanced XML technologies and XSD. Working knowledge of relational and NoSQL databases. Interview Will Focus On: .NET (4/5) ReactJS / Angular (4/5) API Integration (3.5/5) Docker & Kubernetes experience Job Types: Full-time, Contractual / Temporary Pay: ₹1,155,846.59 - ₹1,212,925.91 per year Benefits: Flexible schedule Work from home Schedule: Day shift Fixed shift Monday to Friday Morning shift Application Question(s): Do you have at least 8 years of hands-on experience working with .NET technologies (e.g., C#, ASP.NET, Web API) and at least 3 years with ReactJS or Angular? Which of the following technologies do you have practical experience with? Docker, Kubernetes, API Integration (SOAP/REST), Azure (e.g., AKS, CosmosDB, Key Vault), ReactJS, Angular Work Location: In person Speak with the employer +91 7880090179
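For readers new to the API-integration work this posting screens for, the pattern is easy to see in a small sketch. The stack here is .NET + React, but calling a token-protected REST endpoint looks the same in any language; the sketch below is in Python, and the URL, token, and order fields are placeholders rather than anything from the employer.

```python
# Minimal sketch of consuming a token-protected REST API, the kind of
# integration this posting emphasizes. URL, token, and fields are
# hypothetical placeholders.
import requests

API_BASE = "https://api.example.com"      # placeholder
TOKEN = "REPLACE_WITH_REAL_TOKEN"         # placeholder

def get_order(order_id: str) -> dict:
    """Fetch a single order and fail loudly on non-2xx responses."""
    resp = requests.get(
        f"{API_BASE}/orders/{order_id}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(get_order("42"))
```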

Posted 1 day ago


0.0 - 2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

About Schmooze At Schmooze, we are redefining how people connect and match online. Through cutting-edge AI and machine learning, we’ve built a unique meme-driven recommendation and matching system that keeps users engaged in meaningful and entertaining ways. We don’t just want to build another dating app—we are here to revolutionize how people form connections, one meme at a time. Founded by Vidya Madhavan and Abhinav Anurag, alumni of Stanford and BITS Pilani, Schmooze has been featured in leading publications such as TechCrunch, Forbes, and The Economic Times for our innovative approach to matchmaking. Our app has already garnered millions of interactions worldwide, proving that memes are more than just jokes—they’re a powerful way to connect people. Why You Should Join Us At Schmooze, we move fast, solve tough problems, and refuse to settle for mediocrity. If you're a backend engineer who thrives on tackling world-changing challenges, you'll fit right in. Impact Millions: Your work will directly shape how millions of users interact, match, and build connections. Cutting-Edge Tech: Work with a powerful real-time, data-driven stack, solving complex scalability and performance problems. High Ownership & No Bureaucracy: No unnecessary meetings, no red tape—just a close-knit team working relentlessly to push boundaries. Data-Driven, Results-Oriented: We iterate fast, measure everything, and pivot when needed to build the best product possible. What You’ll Do Scale our infrastructure to support millions of meme-driven interactions. Design, build, and optimize high-performance APIs. Handle real-time data processing using state-of-the-art tools. Simulate edge cases to rigorously test your code. Document your work and improve engineering best practices. Solve real-time customer problems and continuously enhance the user experience. Tech Stack You’ll Work With Backend: Python, Async Programming (asyncio), Sanic Framework Databases & Storage: Postgres, Elasticsearch, Starrocks, Redis Messaging & Streaming: Kafka, Flink, gRPC Infrastructure: Kubernetes, AWS Testing & Performance: Locust, Claude The Challenges You’ll Tackle Work on our State-of-the-art Meme Recommendation Algorithm that ensures maximum user engagement. Contribute to the Matching Algorithm that ensures fair and meaningful connections, not just benefiting the top 5% of users (looking at you, Tumble, Bimber, Cringe—you cheated us all along!). Optimize our backend for ultra-low latency matching and messaging. Build robust systems that can scale seamlessly as our user base explodes. What We Look For Experience: 0-2 years of relevant experience. Any prior start-up experience is a bonus. Hunger to work on world-changing problems. Deep expertise in Python backend development. Strong experience with scalable architectures, distributed systems, and async processing. A hacker’s mindset—ability to move fast, break things, and fix them better. Strong ownership instincts. You don’t wait for directions; you take charge. A bias for action—we don’t believe in waiting around. Our Culture 🏡 Homely Vibe – We are a small, close-knit team that supports each other. 📈 Career Growth is a KPI – Your growth matters to us as much as our company’s success. 🚀 No Hierarchy – The best ideas win, not job titles. 🤝 One Team, One Place – We work together, learn together, and win together. 💰 Spot Bonuses – Out-of-the-box thinking and high ownership don’t go unnoticed.
🔥 Leadership at Every Level – We believe that a small, elite team can only succeed if every single member is a leader in their own right. If you have the hunger, the drive, and the skills, let’s build something legendary together.
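The stack above (Python, asyncio, Sanic) lends itself to a short illustration. This is a hypothetical sketch, not Schmooze's code: it fetches two user profiles concurrently and scores them by meme overlap; route names, fields, and the scoring rule are invented.

```python
# Hypothetical async Sanic endpoint in the spirit of the stack listed above.
import asyncio
from sanic import Sanic
from sanic.response import json as json_response

app = Sanic("meme_match_sketch")

async def fetch_profile(user_id: str) -> dict:
    # Placeholder for a real Postgres/Redis lookup.
    await asyncio.sleep(0.01)
    return {"id": user_id, "liked_memes": {"m1", "m2", "m3"}}

@app.get("/match/<user_a>/<user_b>")
async def match_score(request, user_a: str, user_b: str):
    # Fetch both profiles concurrently, then score by meme overlap.
    a, b = await asyncio.gather(fetch_profile(user_a), fetch_profile(user_b))
    overlap = len(a["liked_memes"] & b["liked_memes"])
    return json_response({"users": [user_a, user_b], "overlap": overlap})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```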

Posted 1 day ago


0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

FICO (NYSE: FICO) is a leading global analytics software company, helping businesses in 100+ countries make better decisions. Join our world-class team today and fulfill your career potential! The Opportunity “A DevOps role at FICO is an opportunity to work with cutting edge cloud technologies with a team focused on delivery of secure cloud solutions and products to enterprise customers.” - VP, DevOps Engineering What You’ll Contribute Design, implement, and maintain Kubernetes clusters in AWS environments. Develop and manage CI/CD pipelines using Tekton, ArgoCD, Flux or similar tools. Implement and maintain observability solutions (monitoring, logging, tracing) for Kubernetes-based applications. Collaborate with development teams to optimize application deployments and performance on Kubernetes. Automate infrastructure provisioning and configuration management using AWS services and tools. Ensure security and compliance in the cloud infrastructure. What We’re Seeking Proficiency in Kubernetes administration and deployment, particularly in AWS (EKS). Experience with AWS services such as EC2, S3, IAM, ACM, Route 53, ECR. Experience with Tekton for building CI/CD pipelines. Strong understanding of observability tools like Prometheus, Grafana or similar. Scripting and automation skills (e.g., Bash, GitHub workflows). Knowledge of cloud platforms and container orchestration. Experience with infrastructure as code tools (Terraform, CloudFormation). Knowledge of Helm. Understanding of security best practices in cloud and Kubernetes environments. Proven experience in delivering microservices and Kubernetes-based systems. Our Offer to You An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers and Earn the Respect of Others. The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences. Highly competitive compensation, benefits and rewards programs that encourage you to bring your best every day and be recognized for doing so. An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie. Why Make a Move to FICO? At FICO, you can develop your career with a leading organization in one of the fastest-growing fields in technology today – Big Data analytics. You’ll play a part in our commitment to help businesses use data to improve every choice they make, using advances in artificial intelligence, machine learning, optimization, and much more. FICO makes a real difference in the way businesses operate worldwide: Credit Scoring — FICO® Scores are used by 90 of the top 100 US lenders. Fraud Detection and Security — 4 billion payment cards globally are protected by FICO fraud systems. Lending — 3/4 of US mortgages are approved using the FICO Score. Global trends toward digital transformation have created tremendous demand for FICO’s solutions, placing us among the world’s top 100 software companies by revenue. We help many of the world’s largest banks, insurers, retailers, telecommunications providers and other firms reach a new level of success. Our success is dependent on really talented people – just like you – who thrive on the collaboration and innovation that’s nurtured by a diverse and inclusive environment. We’ll provide the support you need, while ensuring you have the freedom to develop your skills and grow your career. Join FICO and help change the way business thinks! 
Learn more about how you can fulfil your potential at www.fico.com/Careers. FICO promotes a culture of inclusion and seeks to attract a diverse set of candidates for each job opportunity. We are an equal employment opportunity employer and we’re proud to offer employment and advancement opportunities to all candidates without regard to race, color, ancestry, religion, sex, national origin, pregnancy, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. Research has shown that women and candidates from underrepresented communities may not apply for an opportunity if they don’t meet all stated qualifications. While our qualifications are clearly related to role success, each candidate’s profile is unique and strengths in certain skill and/or experience areas can be equally effective. If you believe you have many, but not necessarily all, of the stated qualifications we encourage you to apply. Information submitted with your application is subject to the FICO Privacy policy at https://www.fico.com/en/privacy-policy
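As a rough illustration of the Kubernetes administration and observability work described above, the sketch below lists pods that are not Ready using the official Python client. It assumes a local kubeconfig pointing at the target cluster (for example an EKS cluster); the namespace is an arbitrary choice.

```python
# Flag pods that are not Ready in a namespace, using the official
# kubernetes Python client and whatever cluster kubeconfig points to.
from kubernetes import client, config

def unready_pods(namespace: str = "default") -> list[str]:
    config.load_kube_config()            # or config.load_incluster_config()
    v1 = client.CoreV1Api()
    bad = []
    for pod in v1.list_namespaced_pod(namespace).items:
        conditions = pod.status.conditions or []
        ready = any(c.type == "Ready" and c.status == "True" for c in conditions)
        if not ready:
            bad.append(pod.metadata.name)
    return bad

if __name__ == "__main__":
    print(unready_pods("default"))
```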

Posted 1 day ago


5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

About Delta Tech Hub: Delta Air Lines (NYSE: DAL) is the U.S. global airline leader in safety, innovation, reliability and customer experience. Powered by our employees around the world, Delta has for a decade led the airline industry in operational excellence while maintaining our reputation for award-winning customer service. With our mission of connecting the people and cultures of the globe, Delta strives to foster understanding across a diverse world and serve as a force for social good. Delta has fast emerged as a customer-oriented, innovation-led, technology-driven business. The Delta Technology Hub will contribute directly to these objectives. It will sustain our long-term aspirations of delivering niche, IP-intensive, high-value, and innovative solutions. It supports various teams and functions across Delta and is an integral part of our transformation agenda, working seamlessly with a global team to create memorable experiences for customers. KEY RESPONSIBILITIES: Designing, prototyping and demonstrating new features and components of front-end and back-end to users to ensure compliance with requirements Collaborate with the technical teams, business teams, and product managers to ensure that the code that is developed meets their vision. Design and code the solutions to meet functional and technical requirements Align to Security / Compliance frameworks and controls requirements. Own quality posture. Write automated tests, ideally before writing code. Develop delivery pipelines and automated deployment scripts. Configure services, such as databases and monitoring. Implement Service Reliability Engineering. Fix problems from the development phase through the production phase, which requires being on call for production support. Provide production support for portfolio applications and participate in on-call pager duty on a rotational basis WHAT YOU NEED TO SUCCEED (MINIMUM QUALIFICATIONS): Bachelor’s degree in Computer Science, Information Systems or related technical field is required At least 5 years of hands-on experience as a Software Engineer or related technical engineering capacity. Strong programming experience in Java, Spring Boot, Quarkus, NoSQL, Relational Databases. Solid understanding of microservice architecture, serverless architecture and security. Experience implementing APIs (REST) via microservices Experience engineering software within an Amazon Web Services (AWS) cloud infrastructure or other prominent enterprise cloud provider is required. Experience building applications with Containers, Kubernetes, RedHat OpenShift, Code Build / Code Pipeline, API Gateways, Lambdas, S3, AWS SDK/CLI Fundamental Awareness of Application Security principles and 12-factor application development principles is required. Experience working with DevSecOps principles, practices and tools in an enterprise technology environment is required. Experience with source control, build tools and Git (GitHub, Bitbucket or other) is required. Experience with application logging and monitoring technologies such as Dynatrace, Sumo Logic, CloudWatch, Splunk etc Professional experience working with Agile Methodologies is required. Working knowledge of the full Software Development Lifecycle, building CI/CD pipelines and practicing Test Driven Development is a requirement. Embraces diverse people, thinking and styles. Consistently makes safety and security, of self and others, the priority.
WHAT WILL GIVE YOU A COMPETITIVE EDGE (PREFERRED QUALIFICATIONS): AWS Certified Solutions Architect or Developer certification Knowledge and experience with the Travel Industry a plus Communication Skills - The ability to communicate verbally and in writing with all levels of employees and management, capable of successful formal and informal communication, speaks and writes clearly and understandably for the audience. Integrity and Trust - Involves being widely trusted, being seen as a direct, truthful individual, can present the unvarnished truth in an appropriate and helpful manner, keeps confidences, admits mistakes, and doesn't misrepresent him/herself for personal gain. Teamwork - Involves working well in a collaborative setting, supporting work team by volunteering for and completing assignments, acting as a positive team member by contributing to discussions, developing and maintaining both formal and informal relationships enterprise-wide, defines success in terms of the entire team through mentoring and knowledge transfer. Technical Expertise - Involves demonstrating a commitment to increasing knowledge and skills in current technical/functional area, keeping up to date on technical developments, staying informed as to industry practices, knowing how to apply relevant technical processes to appropriate business needs. Solution Oriented - Maintains a positive attitude towards coming up with solutions and developing new approaches, doesn't let distractions get in the way, isn't overwhelmed with problems / issues.
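One of the items above, 12-factor application development, is easy to show in miniature: configuration lives in the environment and the process fails fast if a required value is missing. The role itself is Java/Spring/Quarkus, so this Python sketch is purely illustrative and the variable names are invented.

```python
# 12-factor style configuration: read settings from the environment,
# fail fast when a required variable is absent. Names are hypothetical.
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    db_url: str
    log_level: str
    s3_bucket: str

def load_settings() -> Settings:
    # KeyError here is deliberate: missing required config should stop startup.
    return Settings(
        db_url=os.environ["APP_DB_URL"],
        log_level=os.environ.get("APP_LOG_LEVEL", "INFO"),
        s3_bucket=os.environ["APP_S3_BUCKET"],
    )

if __name__ == "__main__":
    print(load_settings())
```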

Posted 1 day ago


3.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

About Position: Are you a passionate backend engineer looking to make a significant impact? Join our cross-functional, distributed team responsible for building and maintaining the core backend functionalities that power our customers. You’ll be instrumental in developing scalable and robust solutions, directly impacting the efficiency and reliability of our platform. This role offers a unique opportunity to work on cutting-edge technologies and contribute to a critical part of our business, all within a supportive and collaborative environment. Role: Junior .NET Engineer Location: Hyderabad Experience: 3 to 5 years Job Type: Full Time Employment What You'll Do: Implement feature/module as per design and requirements shared by Architect, Leads, BA/PM using coding best practices Develop and maintain microservices using C# and .NET Core, and perform unit testing as per the code coverage benchmark. Support testing & deployment activities Micro-Services - containerized micro-services (Docker/Kubernetes/Ansible etc.) Create and maintain RESTful APIs to facilitate communication between microservices and other components. Analyze and fix defects to develop high-standard, stable code as per design specifications. Utilize version control systems (e.g., Git) to manage source code. Requirement Analysis: Understand and analyze functional/non-functional requirements and seek clarifications from Architect/Leads for better understanding of requirements. Participate in estimation activity for given requirements. Coding and Development: Writing clean and maintainable code using best practices of software development. Make use of different code analyzer tools. Follow the TDD approach for any implementation. Perform coding and unit testing as per design. Problem Solving/ Defect Fixing: Investigate and debug any defect raised. Finding root causes, finding solutions, exploring alternate approaches and then fixing defects with appropriate solutions. Fix defects identified during functional/non-functional testing, during UAT within agreed timelines. Perform estimation for defect fixes for self and the team. Deployment Support: Provide prompt response during production support Expertise You'll Bring: Language – C# Visual Studio Professional Visual Studio Code .NET Core 3.1 onwards Entity Framework with code-first approach Dependency Injection Error Handling and Logging SDLC Object-Oriented Programming (OOP) Principles SOLID Principles Clean Coding Principles Design patterns API Rest API with token-based Authentication & Authorization Postman Swagger Database Relational Database: SQL Server/MySQL/PostgreSQL Stored Procedures and Functions Relationships, Data Normalization & Denormalization, Indexes and Performance Optimization techniques Preferred Skills Development Exposure to Cloud: Azure/GCP/AWS Code Quality Tool – Sonar Exposure to the CI/CD process and tools like Jenkins etc., Good understanding of Docker and Kubernetes Exposure to Agile software development methodologies and ceremonies Benefits: Competitive salary and benefits package Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications Opportunity to work with cutting-edge technologies Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards Annual health check-ups Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents Inclusive Environment: Persistent Ltd.
is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive. Our company fosters a value-driven and people-centric work environment that enables our employees to: Accelerate growth, both professionally and personally Impact the world in powerful, positive ways, using the latest technologies Enjoy collaborative innovation, with diversity and work-life wellbeing at the core Unlock global opportunities to work and learn with the industry’s best Let’s unleash your full potential at Persistent. “Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind.”
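As a small illustration of the database skills this posting lists (stored procedures, indexes, performance optimization), the sketch below shows parameterized queries, the habit that enables plan reuse and prevents SQL injection. sqlite3 stands in for SQL Server/MySQL/PostgreSQL, and the table and columns are invented.

```python
# Parameterized query sketch: placeholders are bound by the driver,
# never string-formatted in. sqlite3 stands in for a full RDBMS.
import sqlite3

def orders_for_customer(conn: sqlite3.Connection, customer_id: int) -> list[tuple]:
    cur = conn.execute(
        "SELECT id, total FROM orders WHERE customer_id = ? ORDER BY id",
        (customer_id,),
    )
    return cur.fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")
    conn.execute("INSERT INTO orders VALUES (1, 7, 99.5), (2, 7, 10.0), (3, 8, 5.0)")
    print(orders_for_customer(conn, 7))
```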

Posted 1 day ago


14.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

DevOps Manager Location: Ahmedabad/Hyderabad Exp: 14+ years Experience Required: 14+ years total experience, with 4–5 years in managerial roles. Technical Knowledge and Skills: Mandatory: Cloud: GCP (Complete stack from IAM to GKE) CI/CD: End-to-end pipeline ownership (GitHub Actions, Jenkins, Argo CD) IaC: Terraform, Helm Containers: Docker, Kubernetes DevSecOps: Vault, Trivy, OWASP Nice to Have: FinOps exposure for cost optimization Big Data tools familiarity (BigQuery, Dataflow) Familiarity with Kong, Anthos, Istio Scope: Lead DevOps team across multiple pods and products Define roadmap for automation, security, and CI/CD Ensure operational stability of deployment pipelines Roles and Responsibilities: Architect and guide implementation of enterprise-grade CI/CD pipelines that support multi-environment deployments, microservices architecture, and zero-downtime delivery practices. Oversee Infrastructure-as-Code initiatives to establish consistent and compliant cloud provisioning using Terraform, Helm, and policy-as-code integrations. Champion DevSecOps practices by embedding security controls throughout the pipeline—ensuring image scanning, secrets encryption, policy checks, and runtime security enforcement. Lead and manage a geographically distributed DevOps team, setting performance expectations, development plans, and engagement strategies. Drive cross-functional collaboration with engineering, QA, product, and SRE teams to establish integrated DevOps governance practices. Develop a framework for release readiness, rollback automation, change control, and environment reconciliation processes. Monitor deployment health, release velocity, lead time to recovery, and infrastructure cost optimization through actionable DevOps metrics dashboards. Serve as the primary point of contact for C-level stakeholders during major infrastructure changes, incident escalations, or audits. Own the budgeting and cost management strategy for DevOps tooling, cloud consumption, and external consulting partnerships. Identify, evaluate, and onboard emerging DevOps technologies, ensuring team readiness through structured onboarding, POCs, and knowledge sessions. Foster a culture of continuous learning, innovation, and ownership—driving internal tech talks, hackathons, and community engagement.
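The policy-as-code integrations mentioned above can be illustrated with a small, hedged sketch: scan a Terraform plan exported with `terraform show -json` and flag resources missing a required label. The label name, resource types, and file path are assumptions for illustration only.

```python
# Policy-as-code sketch: check a Terraform plan (JSON export) for a
# required label on selected GCP resource types. All names are assumed.
import json
import sys

REQUIRED_LABEL = "cost-center"                                         # assumed policy
CHECKED_TYPES = {"google_storage_bucket", "google_compute_instance"}   # assumed scope

def violations(plan_path: str) -> list[str]:
    """Return addresses of checked resources missing the required label."""
    with open(plan_path) as fh:
        plan = json.load(fh)
    bad = []
    for rc in plan.get("resource_changes", []):
        if rc.get("type") not in CHECKED_TYPES:
            continue
        after = (rc.get("change") or {}).get("after") or {}
        if REQUIRED_LABEL not in (after.get("labels") or {}):
            bad.append(rc.get("address", "<unknown>"))
    return bad

if __name__ == "__main__":
    missing = violations(sys.argv[1] if len(sys.argv) > 1 else "plan.json")
    print("policy ok" if not missing else f"missing '{REQUIRED_LABEL}' label: {missing}")
```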

Posted 1 day ago


13.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Title: Release Manager – Tools & Infrastructure Location: Hyderabad Experience Level: 13 years+ Department: Engineering / DevOps Reporting To: Head of DevOps / Engineering Director About the Role: We are seeking a hands-on Release Manager with strong DevOps and Infrastructure knowledge to oversee software release pipelines, tooling, and automation processes across distributed systems. The ideal candidate will be responsible for managing releases, ensuring environment readiness, coordinating with engineering, SRE, and QA teams, and driving tooling upgrades and ecosystem health. This is a critical role that bridges the gap between development and operations—ensuring timely, stable, and secure delivery of applications across environments. Key Responsibilities: Release & Environment Management: Manage release schedules, timelines, and coordination with multiple delivery streams. Own the setup and consistency of lower environments and production cutover readiness. Ensure effective version control, build validation, and artifact management across CI/CD pipelines. Oversee rollback strategies, patch releases, and post-deployment validations. Toolchain Ownership: Manage and maintain DevOps tools such as Jenkins, GitHub Actions, Bitbucket, SonarQube, JFrog, Argo CD, and Terraform. Govern container orchestration through Kubernetes and Helm. Maintain secrets and credential hygiene through HashiCorp Vault and related tools. Infrastructure & Automation: Work closely with Cloud, DevOps, and SRE teams to ensure automated and secure deployments. Leverage GCP (VPC, Compute Engine, GKE, Load Balancer, IAM, VPN, GCS) for scalable infrastructure. Ensure adherence to infrastructure-as-code (IaC) standards using Terraform and Helm charts. Monitoring, Logging & Stability: Implement and manage observability tools such as Prometheus, Grafana, ELK, and Datadog. Monitor release impact, track service health post-deployment, and lead incident response if required. Drive continuous improvement for faster and safer releases by implementing lessons from RCAs. Compliance, Documentation & Coordination: Use Jira, Confluence, and ServiceNow for release planning, documentation, and service tickets. Implement basic security standards (OWASP, WAF, GCP Cloud Armor) in release practices. Conduct cross-team coordination with QA, Dev, CloudOps, and Security for aligned delivery.
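A post-deployment validation of the kind described above often reduces to a health gate against the observability stack. The sketch below queries the Prometheus HTTP API for the recent 5xx error ratio and fails if it exceeds a threshold; the Prometheus URL, metric names, and threshold are assumptions.

```python
# Post-deploy health gate sketch: fail the release step if the recent
# 5xx ratio reported by Prometheus is above a threshold. Names assumed.
import requests

PROM_URL = "http://prometheus.example.internal:9090"   # placeholder
QUERY = (
    'sum(rate(http_requests_total{status=~"5.."}[5m]))'
    " / sum(rate(http_requests_total[5m]))"
)

def error_ratio() -> float:
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0

if __name__ == "__main__":
    ratio = error_ratio()
    if ratio > 0.01:
        raise SystemExit(f"Post-deploy check failed: 5xx ratio {ratio:.2%}")
    print(f"Healthy: 5xx ratio {ratio:.2%}")
```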

Posted 1 day ago


3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Talent500 is hiring for one of our client About American Airlines: To Care for People on Life's Journey®. We have a relentless drive for innovation and excellence. Whether you're engaging with customers at the airport or advancing our IT infrastructure, every team member plays a vital role in shaping the future of travel. At American’s Tech Hubs, we tackle complex challenges and pioneer cutting-edge technologies that redefine the travel experience. Our vast network and diverse customer base offer unique opportunities for engineers to solve real-world problems on a grand scale. Join us and immerse yourself in a dynamic, tech-driven environment where your creativity and unique strengths are celebrated. Experience the excitement of being at the forefront of technological innovation, where every day brings new opportunities to make a meaningful impact. About Tech Hub in India: American’s Tech Hub in Hyderabad, India, is our newest location and home to team members who drive technical innovation and engineer unrivalled digital products to best serve American’s customers and team members. With U.S. tech hubs in Dallas-Fort Worth, Texas and Phoenix, Arizona, our new location in Hyderabad, India, positions American to deliver industry-leading technology solutions that create a world-class customer experience. Why you will love this job: As one diverse, high-performing team dedicated to technical excellence, you will focus relentlessly on delivering unrivaled digital products that drive a more reliable and profitable airline. The Software domain refers to the area within Information Technology that focuses on the development, deployment, management, and maintenance of software applications that support business processes and user needs. This includes development, application lifecycle management, requirement analysis, QA, security & compliance, and maintaining the applications and infrastructure. What you will do: As noted above, this list is intended to reflect the current job but there may be additional functions that are not referenced. Management will modify the job or require other tasks be performed whenever it is deemed appropriate to do so, observing, of course, any legal obligations including any collective bargaining obligations. 
Writes, tests, and documents technical work products (e.g., code, scripts, processes) according to organizational standards and practices Devotes time to raising the quality and craftsmanship of products and systems Conducts root cause analysis to identify domain level problems and prescribes action items to mitigate Designs self-contained systems within a team's domain, and leads implementations of significant capabilities in existing systems Coaches team members in the execution of techniques to improve reliability, resiliency, security, and performance Decomposes intricate and interconnected designs into implementations that can be effectively built and maintained by less experienced engineers Anticipates trouble areas in systems under development and guides the team in instrumentation practices to ensure observability and supportability Defines test suites and instrumentation that ensures targets for latency and availability are being consistently met in production Leads through example by prioritizing the closure of open vulnerabilities Evaluates potential attack surfaces in systems under development, identifies best practices to mitigate, and guides teams in their implementation Leads team in the identification of small batches of work to deliver the highest value quickly Ensures reuse is a first-class consideration in all team implementations and is a passionate advocate for broad reusability Formally mentors teammates and helps guide them to and along needed learning journeys Observes their environment and identifies opportunities for introducing new approaches to problems All you will need for success: Minimum Qualifications - Education & Prior Job Experience: Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS / MIS), Engineering or related technical discipline, or equivalent experience / training 3+ years of experience designing, developing, and implementing large-scale solutions in production environments Master's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS / MIS), Engineering or related technical discipline, or equivalent experience / training Preferred Qualifications - Education & Prior Job Experience: Airline Industry experience Mandatory Skills: Java / Python, Selenium / TestNG / Postman, Load Runner (load testing/ Performance monitoring) Skills, Licenses & Certifications: Proficiency with the following technologies: Programming Languages: Java, Python, C#, Javascript / Typescript Frameworks: Spring / Spring Boot, FastAPI Front End Technologies: Angular / React Deployment Technologies: Kubernetes, Docker Source Control: GitHub, Azure DevOps CICD: GitHub Actions, Azure DevOps Data management: PostgreSQL, MongoDB, Redis Integration / APIs Technologies: Kafka, REST, GraphQL Cloud Providers such as Azure and AWS Test Automation: Selenium, TestNG, Postman, SonarQube, Cypress, JUnit / NUnit / PyTest, Cucumber, Playwright, Wiremock / Mockito / Moq Ability to optimize solutions for performance, resiliency and reliability while maintaining an eye toward simplicity Ability to concisely convey ideas verbally, in writing, in code, and in diagrams Proficiency in object-oriented design techniques and principles Proficiency in Agile methodologies, such as SCRUM Proficiency in DevOps Toolchain methodologies, including Continuous Integration and continuous deployment Language, Communication Skills, & Physical Abilities: Ability to effectively communicate both verbally and written with all levels within 
the organization. Physical ability necessary to safely and successfully perform the essential functions of the position, with or without any legally required reasonable accommodations that do not pose an undue hardship. Note: If the Company has reason to question an employee’s physical ability to safely and/or successfully perform the position’s essential job functions, the HR team generally will engage in an interactive process to determine whether a reasonable accommodation is appropriate. HR (working with the operation) ordinarily first speaks with the team member directly and they mutually identify the physical demands of the job that are or may be impacted by the employee’s obvious or known condition. Then, if necessary, HR would request medical documentation from the team member’s treating physician or others to confirm the employee’s ability to perform those essential job functions safely and successfully.
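As a minimal example of the API test automation listed in the mandatory skills (Postman/TestNG/PyTest-style contract checks), here is a pytest sketch against a hypothetical endpoint; the URL and expected fields are invented. It would be run with `pytest -q` as part of a CI stage.

```python
# Contract-style API test sketch: status code, content type, and required
# fields. The service URL and fields are hypothetical placeholders.
import requests

BASE_URL = "https://api.example.com"   # placeholder service under test

def test_flight_status_contract():
    resp = requests.get(f"{BASE_URL}/flights/AA100/status", timeout=10)
    assert resp.status_code == 200
    assert resp.headers["Content-Type"].startswith("application/json")
    body = resp.json()
    for field in ("flight", "status", "departure_time"):
        assert field in body
```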

Posted 1 day ago


8.0 - 15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

TCS Hiring for Azure Cloud Architect (Platform)_PAN India Experience: 8 to 15 Years Only Job Location: PAN India Required Technical Skill Set: Proven experience as a Solution Architect with a focus on Microsoft Azure. Good knowledge of application development and migration. Knowledge of Java or .NET. Strong knowledge of Azure services: Azure Kubernetes Service (AKS), Azure App Services, Azure Functions, Azure Storage, and Azure DevOps. Experience in cloud-native application development and containerization (Docker, Kubernetes). Proficiency in Infrastructure as Code (IaC) tools (e.g., Terraform, ARM templates, Bicep). Strong knowledge of Azure Active Directory, identity management, and security best practices. Hands-on experience with CI/CD processes and DevOps practices. Knowledge of networking concepts in Azure (VNets, Load Balancers, Firewalls). Excellent communication and stakeholder management skills. Key Responsibilities: Design end-to-end cloud solutions leveraging Microsoft Azure services. Develop architecture and solution blueprints that align with business objectives. Lead cloud adoption and migration strategies. Collaborate with development, operations, and security teams to implement best practices. Ensure solutions meet performance, scalability, availability, and security requirements. Optimize cloud cost and performance. Oversee the deployment of workloads on Azure using IaaS, PaaS, and SaaS services. Implement CI/CD pipelines, automation, and infrastructure as code (IaC). Stay updated on emerging Azure technologies and provide recommendations. Kind Regards, Priyankha M
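Much of an Azure platform architect's day-to-day automation goes through the Azure SDK or CLI. As a hedged sketch, the snippet below lists resource groups in a subscription with the Python SDK (`azure-identity` and `azure-mgmt-resource`); the subscription ID is a placeholder.

```python
# List resource groups with the Azure SDK for Python. Credentials come
# from the usual chain (az login, environment, managed identity).
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

def list_resource_groups() -> list[str]:
    credential = DefaultAzureCredential()
    client = ResourceManagementClient(credential, SUBSCRIPTION_ID)
    return [rg.name for rg in client.resource_groups.list()]

if __name__ == "__main__":
    print(list_resource_groups())
```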

Posted 1 day ago


5.0 - 6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

About Birlasoft: Birlasoft, a powerhouse where domain expertise, enterprise solutions, and digital technologies converge to redefine business processes. We take pride in our consultative and design thinking approach, driving societal progress by enabling our customers to run businesses with unmatched efficiency and innovation. As part of the CKA Birla Group, a multibillion-dollar enterprise, we boast a 12,500+ professional team committed to upholding the Group's 162-year legacy. Our core values prioritize Diversity, Equity, and Inclusion (DEI) initiatives, along with Corporate Sustainable Responsibility (CSR) activities, demonstrating our dedication to building inclusive and sustainable communities. Join us in shaping a future where technology seamlessly aligns with purpose. About the Job – We are looking for a highly experienced Senior Developer with a strong background in Python, AI/ML, Generative AI, and experience with Azure OpenAI, Large Language Models (LLMs), cloud platforms like Azure and AWS, and various databases to join our dynamic team. The ideal candidate will have a proven track record of developing and deploying advanced AI solutions, with a focus on leveraging Generative AI techniques to drive innovation and efficiency. Job Title - Technical Specialist Location: Pune Educational Background: Bachelor's degree in Computer Science, Information Technology, or related field. Key Responsibilities - AI/ML Development: Design, develop, and implement advanced AI/ML models and algorithms to solve complex problems and enhance business processes. Generative AI Solutions: Utilize Generative AI techniques (e.g., GANs, VAEs) to create innovative applications and improve existing systems. Python Programming: Write clean, efficient, and scalable code in Python, using libraries such as TensorFlow, PyTorch, scikit-learn, and others. Data Analysis and Modeling: Analyze large datasets to extract insights, build predictive models, and support data-driven decision-making. Azure OpenAI and LLMs: Develop and deploy AI solutions using Azure OpenAI services and Large Language Models (LLMs) to enhance capabilities and performance. Cloud Platforms: Utilize cloud platforms like Azure and AWS for deploying and managing AI/ML solutions. Database Management: Work with various databases, including SQL, MongoDB, NoSQL, and vector databases, to store, manage, and retrieve data efficiently. Collaboration: Work closely with cross-functional teams, including data scientists, engineers, and product managers, to understand requirements and deliver high-quality AI solutions. Mentorship: Provide technical guidance and mentorship to junior developers and team members. Continuous Improvement: Stay updated with the latest advancements in AI/ML, Generative AI, Azure OpenAI, LLMs, cloud technologies, and database management, and apply them to enhance existing solutions and develop new ones. Documentation: Document AI models, algorithms, and processes, and provide regular reports on project progress and outcomes. Required Qualifications: Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Experience: Minimum of 5-6 years of experience in AI/ML development, with a focus on Python, Generative AI, and Azure OpenAI. Technical Skills: Proficiency in Python and relevant libraries (TensorFlow, PyTorch, scikit-learn, etc.). Extensive experience with Generative AI techniques (GANs, VAEs, etc.). Strong understanding of machine learning algorithms, data analysis, and model deployment. 
Experience with Azure OpenAI services and Large Language Models (LLMs). Proficiency in cloud platforms like Azure and AWS. Experience with databases such as SQL, MongoDB, NoSQL, and vector databases. Familiarity with containerization (Docker, Kubernetes). Soft Skills: Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills. Ability to lead and mentor a team. Proactive and self-motivated with a passion for innovation.
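The Azure OpenAI integration work described above typically starts with a chat-completion call against a deployed model. A hedged sketch using the `openai` Python package (v1+) follows; the endpoint, API version, and deployment name are placeholders to be replaced with your own resource's values.

```python
# Azure OpenAI chat-completion sketch. Endpoint, API version, and
# deployment name are placeholders, not a specific employer's setup.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://your-resource.openai.azure.com",  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",                                  # example version
)

def summarize(text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-deployment",   # your deployment name, not the model family
        messages=[
            {"role": "system", "content": "Summarize the user's text in one sentence."},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(summarize("Generative AI models can draft text, code, and images."))
```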

Posted 1 day ago


4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Role - AWS Java Developer Experience - 4-8 years Location - PAN India Required Skills: Primary - AWS + Java Spring Boot; Secondary - NodeJS, TypeScript, JSLT, AWS (various services), Git, Maven, Docker, New Relic, SQL, DBA. JD: AWS Java Node JS Primary Required Skills: Highly proficient in Java and NodeJS. Must have hands-on experience and deep understanding of Cloud Technologies: Microservices/API, AWS, IAM, S3, EFS, Amazon SQS, Amazon SNS, AWS APIs, AWS CLI, Amazon Kinesis, Apache Kafka, CloudFormation, Serverless Good understanding of relational databases like MySQL, PostgreSQL. Exposure to NoSQL systems like Redis/MongoDB is a plus. Good understanding of web technologies such as JavaScript, HTML5, CSS. Good understanding of search platforms such as Elasticsearch is required A good understanding of Agile development methodologies. Hands-on experience in improving MySQL queries and server response time is a must. Good understanding of version control tools like Git, Subversion is required. Good understanding of Docker, Kubernetes, Jenkins, CI/CD Tools. Familiarity with TDD in JS with the help of frameworks like Jasmine, Mocha, Chai, Karma etc. is a plus. Excellent analytical skills Good verbal and written communication
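For the messaging services named above, a minimal SQS round trip shows the shape of the work. The sketch uses boto3; the queue URL and region are placeholders and credentials are assumed to come from the standard AWS configuration chain.

```python
# SQS send/receive/delete sketch with boto3. Queue URL and region are placeholders.
import json
import boto3

QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/orders"  # placeholder

def send_and_receive():
    sqs = boto3.client("sqs", region_name="ap-south-1")
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps({"order_id": 42}))
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=5)
    for msg in resp.get("Messages", []):
        print("got:", msg["Body"])
        # Delete after successful processing so the message is not redelivered.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

if __name__ == "__main__":
    send_and_receive()
```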

Posted 1 day ago


3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Hi, We have an excellent job opportunity for a Software Development Engineer role at our organization, People Tech Group. Job Description: Job Title: SDE Experience: 3+ Years Location: Hyderabad Job Type: Full-Time Job Summary: We are seeking a talented Java Developer with expertise in building robust, scalable backend systems and proficiency in AWS Cloud services. While backend development is the primary focus, candidates with experience in frontend technologies will be given an advantage. This role offers an opportunity to work on end-to-end application development and collaborate with a dynamic team to deliver high-quality solutions. Key Responsibilities: Design, develop, and maintain backend services and APIs using Java and related frameworks. Data Structures and Algorithms plus Design Patterns. Leverage AWS cloud services (e.g., EC2, S3, RDS, Lambda) to build scalable and reliable systems. Collaborate with cross-functional teams to define system architectures and implement solutions. Ensure high performance, security, and responsiveness of applications. Debug and resolve backend issues, ensuring code quality and maintainability. Contribute to frontend development tasks if required, utilizing frameworks like React or Angular. Maintain clear documentation for code and processes. Key Skills and Qualifications: Bachelor’s degree in Computer Science, Engineering, or a related field. 3+ years of experience in backend development with Java (Java 8+ preferred). Expertise in frameworks such as Spring Boot and Hibernate. Proficiency with AWS services like EC2, S3, RDS, Lambda, API Gateway, and DynamoDB. Strong understanding of RESTful API design, microservices architecture, and design patterns. Hands-on experience with CI/CD tools (e.g., Jenkins, Git, or similar). Familiarity with containerization tools (Docker) and orchestration systems (Kubernetes) is a plus. Basic understanding or working experience with frontend technologies like React, Angular, or Vue.js. Strong communication and problem-solving skills with a collaborative mindset. Preferred Qualifications: AWS certifications (e.g., AWS Certified Developer – Associate). Experience with Agile methodologies and tools like JIRA. Knowledge of database systems (SQL and NoSQL) and caching mechanisms (Redis, Memcached). Experience in full-stack development is a significant advantage. Why Join Us? People Tech Group has significantly grown over the past two decades, focusing on enterprise applications and IT services. We are headquartered in Bellevue, Washington, with a presence across the USA, Canada, and India. We are also expanding to the EU, ME, and APAC regions. With a strong pipeline of projects and satisfied customers, People Tech has been recognized as a Gold Certified Partner for Microsoft and Oracle. Benefits: L1 Visa opportunities to the USA after 1 year of a proven track record. Competitive wages with private healthcare cover. Incentives for certifications and educational assistance for relevant courses. Support for family with maternity leave. Complimentary daily lunch and participation in employee resource groups.
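DynamoDB, one of the AWS services listed above, is simplest to see through the boto3 resource API. In the sketch below the table name, key schema, and region are assumptions, and the table is presumed to already exist with a `user_id` partition key.

```python
# DynamoDB put/get sketch via boto3's resource API. Table, keys, and
# region are hypothetical; the table is assumed to already exist.
import boto3

def put_and_get(user_id: str) -> dict:
    table = boto3.resource("dynamodb", region_name="ap-south-1").Table("users")  # placeholder
    table.put_item(Item={"user_id": user_id, "plan": "free", "logins": 1})
    return table.get_item(Key={"user_id": user_id}).get("Item", {})

if __name__ == "__main__":
    print(put_and_get("u-123"))
```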

Posted 1 day ago


10.0 - 12.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Source: LinkedIn

Overview The Software Engineering Manager will play a pivotal role in software development activities and long-term initiative planning and collaboration across the Strategy & Transformation (S&T) organization. Software Engineering is the cornerstone of scalable digital transformation across PepsiCo’s value chain. This leader will deliver the end-to-end software development experience, deliver high-quality software as part of the DevOps process, and have accountability for our business operations. The leader in this role will be a highly experienced Software Engineering Manager, hands-on with Java/Python/Azure technologies, leading the design, development and support of our Integration platform. This role is critical in shaping our integration landscape, establishing development best practices, and mentoring a world-class engineering team. This role will play a key leadership role in a product-focused, high-growth startup/enterprise environment, owning end-to-end integration services. Responsibilities Support and guide a team of engineers in developing and maintaining Digital Products and Applications (DPA). Oversee the comprehensive development of integration services for the Integration platform utilizing Java and Python on Azure. Design scalable, performant, and secure systems ensuring maintainability and quality. Establish code standards and best practices; conduct code reviews and technical audits. Advise on the selection of tools, libraries, and frameworks. Research emerging technologies and provide recommendations for their adoption. Uphold high standards of Integration services and performance across platforms. Foster partnerships with User Experience, Product Management, IT, Data & Analytics, Emerging Tech, Innovation, and Process Engineering teams to deliver the Digital Products portfolio. Create a roadmap and schedule for implementation based on business requirements and strategy. Demonstrate familiarity with AI tools and platforms such as OpenAI (GPT-3/4, Assistants API), Anthropic, or similar LLM providers. Integrate AI capabilities into applications, including AI copilots and AI agents, smart chatbots, automated data processors, and content generators. Understand prompt engineering, context handling, and AI output refinement. Lead multi-disciplinary, high-performance work teams distributed across remote locations effectively. Build, manage, develop, and mentor a team of engineers. Engage with executives throughout the company to advocate the narrative surrounding software engineering. Expand DPA capabilities through a customer-focused, services-driven digital solutions platform leveraging data and AI to deliver automated and personalized experiences. Manage and appropriately escalate delivery impediments, risks, issues, and changes associated with engineering initiatives to stakeholders. Collaborate with key business partners to recommend solutions that best meet the strategic needs of the business.
Qualifications Bachelor's or Master's degree in Computer Science, Engineering, or a related field 10-12 years of software design and development (Java, Spring Boot, Python) 8-10 years of Java/Python development, enterprise-grade applications expertise 3-5 years of microservices development and RESTful API design 3-5 years with cloud-native solutions (Azure preferred, AWS, Google Cloud) Strong understanding of web protocols, REST APIs, SOA 3-5 years as lead developer, mentoring teams, driving technical direction Proficient with relational databases (Oracle, MSSQL, MySQL) and NoSQL databases (Couchbase, MongoDB) Exposure to ADF or ADB Experience with Azure Kubernetes Service or equivalent Knowledge of event-driven architecture and message brokers (Kafka, ActiveMQ) Data integration experience across cloud and on-prem systems Deep understanding of CI/CD pipelines, DevOps automation Ability to write high-quality, secure, scalable code Experience delivering mission-critical, high-throughput systems Strong problem-solving, communication, stakeholder collaboration skills Experience in Scaled Agile (SAFe) as technical lead Knowledge of Salesforce ecosystem (Sales Cloud/CRM) is a plus
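The event-driven architecture and message brokers mentioned in the qualifications can be illustrated with a small producer sketch using the kafka-python client; the broker address, topic, and payload are placeholders, not PepsiCo systems.

```python
# Kafka producer sketch with kafka-python. Broker, topic, and event
# shape are hypothetical placeholders.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",                      # placeholder broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_order_event(order_id: int, status: str) -> None:
    # Fire-and-flush for the sketch; production code would batch and handle errors.
    producer.send("orders.events", {"order_id": order_id, "status": status})
    producer.flush()

if __name__ == "__main__":
    publish_order_event(42, "CREATED")
```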

Posted 1 day ago


12.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Source: LinkedIn

Overview Deputy Director - Data Engineering PepsiCo operates in an environment undergoing immense and rapid change. Big-data and digital technologies are driving business transformation that is unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences and IoT. The key to winning in these areas is being able to leverage enterprise data foundations built on PepsiCo’s global business scale to enable business insights, advanced analytics, and new product development. PepsiCo’s Data Management and Operations team is tasked with the responsibility of developing quality data collection processes, maintaining the integrity of our data foundations, and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation. What PepsiCo Data Management and Operations does: Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the company. Responsible for day-to-day data collection, transportation, maintenance/curation, and access to the PepsiCo corporate data asset Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science or other stakeholders. Increase awareness about available data and democratize access to it across the company. As a data engineering lead, you will be the key technical expert overseeing PepsiCo's data product build & operations and drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be empowered to create & lead a strong team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help lead the development of very large and complex data applications into public cloud environments directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners and business users. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. Responsibilities Data engineering lead role for D&Ai data modernization (MDIP) Ideally, the candidate must be flexible to work an alternative schedule, either a traditional work week from Monday to Friday, or Tuesday to Saturday, or Sunday to Thursday, depending upon the coverage requirements of the job. The candidate can work with their immediate supervisor to change the work schedule on a rotational basis depending on the product and project requirements. Manage a team of data engineers and data analysts by delegating project responsibilities and managing their flow of work as well as empowering them to realize their full potential. Design, structure and store data into unified data models and link them together to make the data reusable for downstream products. Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products. Create reusable accelerators and solutions to migrate data from legacy data warehouse platforms such as Teradata to Azure Databricks and Azure SQL.
Enable and accelerate standards-based development prioritizing reuse of code, adopt test-driven development, unit testing and test automation with end-to-end observability of data Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality, performance and cost. Collaborate with internal clients (product teams, sector leads, data science teams) and external partners (SI partners/data providers) to drive solutioning and clarify solution requirements. Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects to build and support the right domain architecture for each application following well-architected design standards. Define and manage SLAs for data products and processes running in production. Create documentation for learnings and knowledge transfer to internal associates. Qualifications 12+ years of engineering and data management experience 12+ years of overall technology experience that includes at least 5+ years of hands-on software development, data engineering, and systems architecture. 8+ years of experience with Data Lakehouse, Data Warehousing, and Data Analytics tools. 6+ years of experience in SQL optimization and performance tuning on MS SQL Server, Azure SQL or any other popular RDBMS 6+ years of experience in Python/PySpark/Scala programming on big data platforms like Databricks 4+ years in cloud data engineering experience in Azure or AWS. Fluent with Azure cloud services. Azure Data Engineering certification is a plus. Experience with integration of multi-cloud services with on-premises technologies. Experience with data modelling, data warehousing, and building high-volume ETL/ELT pipelines. Experience with data profiling and data quality tools like Great Expectations. Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets. Experience with at least one business intelligence tool such as Power BI or Tableau Experience with running and scaling applications on the cloud infrastructure and containerized services like Kubernetes. Experience with version control systems like ADO, GitHub and CI/CD tools for DevOps automation and deployments. Experience with Azure Data Factory, Azure Databricks and Azure Machine learning tools. Experience with Statistical/ML techniques is a plus. Experience with building solutions in the retail or in the supply chain space is a plus. Understanding of metadata management, data lineage, and data glossaries is a plus. BA/BS in Computer Science, Math, Physics, or other technical fields. Candidate must be flexible to work an alternative work schedule, either a traditional work week from Monday to Friday, or Tuesday to Saturday, or Sunday to Thursday, depending upon product and project coverage requirements of the job. Candidates are expected to be in the office at the assigned location at least 3 days a week, and the days at work need to be coordinated with their immediate supervisor. Skills, Abilities, Knowledge: Excellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior level management. Proven track record of leading, mentoring data teams. Strong change manager. Comfortable with change, especially that which arises through company growth. Ability to understand and translate business requirements into data and technical requirements.
High degree of organization and ability to manage multiple, competing projects and priorities simultaneously. Positive and flexible attitude to enable adjusting to different needs in an ever-changing environment. Strong leadership, organizational and interpersonal skills; comfortable managing trade-offs. Foster a team culture of accountability, communication, and self-management. Proactively drives impact and engagement while bringing others along. Consistently attain/exceed individual and team goals. Ability to lead others without direct authority in a matrixed environment. Comfortable working in a hybrid environment with teams consisting of contractors as well as FTEs spread across multiple PepsiCo locations. Domain Knowledge in CPG industry with Supply chain/GTM background is preferred.
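A compact sketch of the pipeline work described above: read a raw extract, deduplicate on a business key, apply a minimal data-quality rule, and write curated output with PySpark. Paths, columns, and the KPI printed at the end are illustrative assumptions.

```python
# PySpark curation sketch: dedup on a business key, flag a simple DQ rule,
# emit an operational KPI, write parquet. Paths and columns are invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curate_sales_sketch").getOrCreate()

raw = spark.read.csv("/mnt/raw/sales.csv", header=True, inferSchema=True)  # placeholder path

curated = (
    raw.dropDuplicates(["order_id"])                       # business-key dedup
       .withColumn("is_valid", F.col("amount") > 0)        # minimal DQ rule
)

# Operational KPI of the kind a pipeline would emit to monitoring.
total = curated.count()
invalid = curated.filter(~F.col("is_valid")).count()
print(f"rows={total}, failing_dq={invalid}")

curated.write.mode("overwrite").parquet("/mnt/curated/sales")  # placeholder path
```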

Posted 1 day ago


0 years

0 Lacs

Kochi, Kerala, India

On-site

Source: LinkedIn

Introduction Work with Match360, Publisher, and Watsonx integrations to modernize MDM workloads Drive architectural decisions and ensure alignment with product roadmaps and enterprise standards Secondary: Informatica MDM (Desirable Skillset) Understand Key Concepts Of Informatica MDM Including: Landing, staging, base objects, trust & match rules Hierarchy configuration, E360 views, and SIF/REST API integrations Support data ingestion processes (batch & real-time), transformation, and cleansing routines via IDQ and Java-based user exits Provide insights and inputs to help us strategically position IBM MDM against Informatica, shaping unique assets and accelerators Cross-Functional and Strategic Responsibilities Collaborate with data governance and business teams to implement DQ rules, lineage, and business glossaries Mentor junior developers; participate in design/code reviews and knowledge-sharing sessions Create and maintain documentation: architecture diagrams, integration blueprints, solution specs Stay current with modern MDM practices, AI/ML in data mastering, and cloud-first platforms (e.g., CP4D, IICS, Snowflake, Databricks) Experience with other database platforms and technologies (e.g., DB2, Oracle, SQL Server). Experience with containerization technologies (e.g., Docker, Kubernetes) and orchestration tools. Knowledge of database regulatory compliance requirements (e.g., GDPR, HIPAA). Your Role And Responsibilities We are seeking an experienced and self-driven Senior MDM Consultant to design, develop, and maintain enterprise-grade Master Data Management solutions with a primary focus on IBM MDM and foundational knowledge of Informatica MDM. This role will play a key part in advancing our data governance, quality, and integration strategies across customer, product, and party domains. Having experience in IBM DataStage, Knowledge Catalog, Cloud Pak for Data, and Manta is important. You will work closely with cross-functional teams including Data Governance, Source System Owners, and Business Data Stewards to implement robust MDM solutions that ensure consistency, accuracy, and trustworthiness of enterprise data. Strong Hands-on Experience With: Informatica MDM 10.x, IDQ, and Java-based user exits. MDM components: base/landing/staging tables, relationships, mappings, hierarchy, E360 Informatica PowerCenter, IICS, or similar ETL tools Experience with REST APIs, SOA, event-based integrations, and SQL/RDBMS. Familiarity with IBM MDM core knowledge in matching, stewardship UI, workflows, and metadata management. Excellent understanding of data architecture, governance, data supply chain, and lifecycle management. Strong communication, documentation, and stakeholder management skills. Experience with cloud MDM/SaaS solutions and DevOps automation for MDM deployments. Knowledge of BAW, Consent Management, Account & Macro Role configuration. Preferred Education Bachelor's Degree
Preferred Technical And Professional Experience Other required skills: IBM DataStage, Knowledge Catalog, Cloud Pak for Data, Manta

Posted 1 day ago

Apply

3.0 - 5.0 years

0 Lacs

Kanayannur, Kerala, India

On-site

Linkedin logo

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Title: Python Full Stack Developer with Azure Cloud and SQL Experience

Job Description: As a Python Full Stack Developer, you will be responsible for designing, developing, and maintaining web applications using Python and related technologies. You will work closely with cross-functional teams to deliver high-quality software solutions. The ideal candidate will also have experience with Azure Cloud services and SQL development.

Key Responsibilities:
Design and develop scalable web applications using Python, Django/Flask, and JavaScript frameworks (e.g., React, Angular, or Vue.js).
Collaborate with UI/UX designers to create user-friendly interfaces and enhance user experience.
Implement RESTful APIs and integrate with third-party services.
Manage and optimize databases using SQL (e.g., PostgreSQL, MySQL, or SQL Server).
Deploy and manage applications on Azure Cloud, utilizing services such as Azure App Services, Azure Functions, and Azure SQL Database.
Write clean, maintainable, and efficient code following best practices and coding standards.
Conduct code reviews and provide constructive feedback to team members.
Troubleshoot and resolve application issues, ensuring high availability and performance.
Stay updated with emerging technologies and industry trends to continuously improve development processes.

Qualifications:
Requires a minimum of 3-5 years of prior relevant experience.
Bachelor’s degree in Computer Science, Information Technology, or a related field.
Proven experience as a Full Stack Developer with a strong focus on Python development.
Proficiency in front-end technologies such as HTML, CSS, and JavaScript frameworks (React, Angular, or Vue.js).
Experience with back-end frameworks such as Django or Flask.
Strong knowledge of SQL and experience with database design and management.
Hands-on experience with Azure Cloud services and deployment strategies.
Familiarity with version control systems (e.g., Git).
Excellent problem-solving skills and attention to detail.
Strong communication and teamwork abilities.

Preferred Qualifications:
Experience with containerization technologies (e.g., Docker, Kubernetes).
Knowledge of DevOps practices and CI/CD pipelines.
Familiarity with Agile development methodologies.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 day ago

Apply

8.0 years

0 Lacs

India

Remote

Linkedin logo

Position: Fullstack Developer (AI + React)
Experience: 8+ years
Work Mode: Remote
Shift timings: 8 am-5 pm
Notice Period: Immediate
Experience with AI/ML: 3+ years

Tech Stack: React, Next.js, TypeScript, FastAPI, Python, PostgreSQL, MongoDB, GPT-4, LangChain, Terraform, AWS

Must-Haves:
Expert-level proficiency in React, TypeScript, and Next.js, including SSR and SSG.
Strong backend experience using Python and FastAPI, with a focus on API design, database modeling (PostgreSQL, MongoDB), and secure authentication protocols (e.g., JWT, OAuth2).
Hands-on experience with prompt engineering and deploying large language models (LLMs) such as GPT-4, LLaMA, or open-source equivalents.
Familiarity with ML model serving frameworks (e.g., TorchServe, BentoML) and container orchestration tools (Docker, Kubernetes).
In-depth knowledge of the AI/ML development lifecycle, from data preprocessing to model monitoring and retraining.
Strong understanding of Agile/Scrum, including writing user stories, defining acceptance criteria, and conducting code reviews.
Proven ability to translate financial domain requirements (accounting, bookkeeping, reporting) into scalable product features.
Experience integrating with financial services APIs (e.g., accounting platforms, payment gateways, banking feeds) is a plus.
Excellent written and verbal communication skills in English, with the ability to explain complex ideas to non-technical audiences.

Skills: Next.js, API design, container orchestration tools, financial services API integration, Agile, Scrum, communication skills, PostgreSQL, prompt engineering, Terraform, MongoDB, GPT-4, AI/ML, React, AWS, ML, ML model serving frameworks, database modeling, TypeScript, Python, FastAPI, secure authentication techniques, LangChain

Posted 1 day ago

Apply

7.0 - 12.0 years

6 - 12 Lacs

Chennai, Mumbai (All Areas)

Hybrid

Naukri logo

If you are an immediate joiner or are serving a notice period of 15 days or less, please do not call; WhatsApp your details and resume to 9003993690, or email your resume to sushma.v@bct-consulting.com.

Requirements:
Knowledge of the Python ecosystem.
Experience with HTTP REST APIs, with a focus on Django.
Experience with Git (version control), e.g., GitLab, GitLab CI.
Experience in DevOps/Ops.
Linux operating system experience.
Experience in containerization (Docker, Podman).
LLM operations.
Cloud experience (e.g., IBM Cloud, Azure).

Posted 1 day ago

Apply

7.0 - 12.0 years

30 - 35 Lacs

Noida, Hyderabad, Mumbai (All Areas)

Work from Office

Naukri logo

About The Company: ARA's client is a forward-thinking IT services company, helping businesses accelerate their digital and data journeys. They specialize in Product Engineering, Digital Transformation, and Application Modernization, building solutions that are robust, scalable, and secure. Their teams embed Agile, DevOps, and Quality Engineering into every project, ensuring speed and excellence at scale. With deep expertise in Data Analytics, AI/ML, and Generative AI, they transform raw data into actionable insights. Their work spans web, mobile, and cloud platforms, integrating technologies like NLP and predictive analytics to deliver intelligent, connected experiences.

The Role: We are looking for an experienced DevOps Lead to drive the implementation and optimization of our DevOps practices. You will lead a team of engineers, ensuring our software development lifecycle is efficient, reliable, and scalable.

Key Responsibilities:
Team Leadership: Lead and mentor a team of DevOps engineers, fostering a culture of continuous improvement and collaboration.
CI/CD Pipeline Development: Develop and manage CI/CD pipelines for multiple projects, ensuring seamless integration and deployment.
Automation: Automate and streamline software development and release processes to enhance efficiency and reduce errors.
Collaboration: Collaborate with development, QA, and operations teams to ensure seamless integration and deployment.
Monitoring and Performance: Monitor system performance and implement improvements as necessary to ensure high availability and scalability.
Security and Compliance: Oversee security and compliance measures in the DevOps processes, ensuring adherence to industry standards.

Skills Required:
Technical Skills: Proven experience with CI/CD tools like Jenkins, GitLab, or CircleCI. Strong knowledge of cloud platforms such as AWS, Azure, or Google Cloud. Experience with containerization technologies like Docker and Kubernetes. Proficiency in scripting languages such as Python, Bash, or PowerShell. Experience with infrastructure-as-code tools like Terraform or Ansible.
Soft Skills: Strong leadership and team management skills. Excellent problem-solving and analytical abilities. Effective communication and collaboration skills.

Qualifications & Experience:
Education: Bachelor's degree in Computer Science, Engineering, or a related field.
Experience: 6+ years of experience in a DevOps or related role.
Certifications such as AWS Certified DevOps Engineer, Azure DevOps Engineer Expert, or Google Professional DevOps Engineer.
Experience with Agile methodologies and version control systems like Git.

Posted 1 day ago

Apply

3.0 - 5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Linkedin logo

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Title: Python Full Stack Developer with Azure Cloud and SQL Experience

Job Description: As a Python Full Stack Developer, you will be responsible for designing, developing, and maintaining web applications using Python and related technologies. You will work closely with cross-functional teams to deliver high-quality software solutions. The ideal candidate will also have experience with Azure Cloud services and SQL development.

Key Responsibilities:
Design and develop scalable web applications using Python, Django/Flask, and JavaScript frameworks (e.g., React, Angular, or Vue.js).
Collaborate with UI/UX designers to create user-friendly interfaces and enhance user experience.
Implement RESTful APIs and integrate with third-party services.
Manage and optimize databases using SQL (e.g., PostgreSQL, MySQL, or SQL Server).
Deploy and manage applications on Azure Cloud, utilizing services such as Azure App Services, Azure Functions, and Azure SQL Database.
Write clean, maintainable, and efficient code following best practices and coding standards.
Conduct code reviews and provide constructive feedback to team members.
Troubleshoot and resolve application issues, ensuring high availability and performance.
Stay updated with emerging technologies and industry trends to continuously improve development processes.

Qualifications:
Requires a minimum of 3-5 years of prior relevant experience.
Bachelor’s degree in Computer Science, Information Technology, or a related field.
Proven experience as a Full Stack Developer with a strong focus on Python development.
Proficiency in front-end technologies such as HTML, CSS, and JavaScript frameworks (React, Angular, or Vue.js).
Experience with back-end frameworks such as Django or Flask.
Strong knowledge of SQL and experience with database design and management.
Hands-on experience with Azure Cloud services and deployment strategies.
Familiarity with version control systems (e.g., Git).
Excellent problem-solving skills and attention to detail.
Strong communication and teamwork abilities.

Preferred Qualifications:
Experience with containerization technologies (e.g., Docker, Kubernetes).
Knowledge of DevOps practices and CI/CD pipelines.
Familiarity with Agile development methodologies.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Linkedin logo

Role Description
Job Title: AI Engineer
Location: Kochi / Trivandrum
Experience: 3-7 Years

About The Role
We are seeking a talented and experienced AI Engineer to join our growing team and play a pivotal role in the development and deployment of innovative AI solutions. This individual will be a key contributor to our AI transformation, working closely with AI Architects, Data Scientists, and delivery teams to bring cutting-edge AI concepts to life.

Key Responsibilities
Model Development & Implementation: Design, develop, and implement machine learning models and AI algorithms, from initial prototyping to production deployment.
Data Engineering: Work with large and complex datasets, performing data cleaning, feature engineering, and data pipeline development to prepare data for AI model training.
Solution Integration: Integrate AI models and solutions into existing enterprise systems and applications, ensuring seamless functionality and performance.
Model Optimization & Performance: Optimize AI models for performance, scalability, and efficiency, and monitor their effectiveness in production environments.
Collaboration & Communication: Collaborate effectively with cross-functional teams, including product managers, data scientists, and software engineers, to understand requirements and deliver impactful AI solutions.
Code Quality & Best Practices: Write clean, maintainable, and well-documented code, adhering to best practices for software development and MLOps.
Research & Evaluation: Stay updated with the latest advancements in AI/ML research and technologies, evaluating their potential application to business challenges.
Troubleshooting & Support: Provide technical support and troubleshooting for deployed AI systems, identifying and resolving issues promptly.

Key Requirements
3-7 years of experience in developing and deploying AI/ML solutions.
Strong programming skills in Python (or similar languages) with extensive experience in AI/ML frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
Solid understanding of machine learning algorithms, deep learning concepts, and statistical modelling.
Experience with data manipulation and analysis libraries (e.g., Pandas, NumPy).
Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and their AI/ML services.
Experience with version control systems (e.g., Git) and collaborative development workflows.
Excellent problem-solving skills and attention to detail.
Strong communication and teamwork abilities.
Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field.

Good To Have
Experience with MLOps practices and tools (e.g., MLflow, Kubeflow).
Familiarity with containerization technologies (e.g., Docker, Kubernetes).
Experience with big data technologies (e.g., Spark, Hadoop).
Prior experience in an IT services or product development environment.
Knowledge of specific AI domains such as NLP, computer vision, or time series analysis.

Key Skills: Machine Learning, Deep Learning, Python, TensorFlow, PyTorch, Data Preprocessing, Model Deployment, MLOps, Cloud AI Services, Software Development, Problem-solving.
Skills: Machine Learning, Data Science, Artificial Intelligence

Posted 1 day ago

Apply

5.0 - 8.0 years

0 Lacs

Andhra Pradesh, India

On-site

Linkedin logo

Key Responsibilities
Develop scalable, secure, and high-performance web applications using Java on the back end and modern front-end frameworks.
Work closely with product owners, UI/UX designers, and other developers to implement user-friendly features and functionality.
Design and implement RESTful APIs and integrate third-party services.
Leverage GCP services (e.g., App Engine, Cloud Functions, Cloud Run, Pub/Sub, Cloud Storage) for cloud-native architecture.
Build and maintain reusable code and libraries for future use.
Write unit, integration, and end-to-end tests to ensure code quality.
Participate in code reviews and agile development ceremonies (sprint planning, retrospectives, etc.).
Implement CI/CD pipelines using GCP tools (Cloud Build, Cloud Deploy) or other tools like Jenkins, GitHub Actions, etc.
Collaborate with DevOps teams to deploy applications to cloud platforms.

Required Skills & Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
5-8 years of experience as a full stack developer.
Strong back-end experience in Java (Java 8 or higher), Spring Boot, Hibernate/JPA.
Proficient in front-end technologies such as HTML5, CSS3, JavaScript.
Hands-on experience with modern JavaScript frameworks (React.js, Angular, or Vue.js).
Knowledge of RESTful APIs and microservices architecture.
Experience with relational databases like MySQL, PostgreSQL, or Oracle.
Familiar with build tools and version control (Maven/Gradle, Git).
Experience in building CI/CD pipelines and deployment practices.
Experience in unit testing frameworks (JUnit, Mockito, Jasmine, etc.).
Experience with Kubernetes (GKE preferred).
Experience with serverless computing and microservices architecture.
GCP certification (Associate Cloud Engineer, Professional Cloud Developer, etc.) is desirable.

Posted 1 day ago

Apply

8.0 years

0 Lacs

Andhra Pradesh, India

On-site

Linkedin logo

Design, develop, test, and deploy scalable and resilient microservices using Java and Spring Boot.
Collaborate with cross-functional teams to define, design, and ship new features.
Work on the entire software development lifecycle, from concept and design to testing and deployment.
Implement and maintain AWS cloud-based solutions, ensuring high performance, security, and scalability.
Integrate microservices with Kafka for real-time data streaming and event-driven architecture.
Troubleshoot and resolve issues in a timely manner, ensuring optimal system performance.
Keep up to date with industry trends and advancements, incorporating best practices into our development processes.

Requirements:
Should be a Java Full Stack Developer.
Bachelor's or Master's degree in Computer Science or a related field.
8+ years of hands-on experience in Java full stack development with Spring Boot: Java 11+, Spring Boot, Angular/React, REST APIs, Docker, Kubernetes, microservices.
Proficiency in Spring Boot and other Spring Framework components.
Extensive experience in designing and developing RESTful APIs.
Solid understanding of AWS services, including but not limited to EC2, Lambda, S3, and RDS.
Experience with Kafka for building event-driven architectures.
Strong database skills, including SQL and NoSQL databases.
Familiarity with containerization and orchestration tools (Docker, Kubernetes).
Excellent problem-solving and troubleshooting skills.
Strong communication and collaboration skills.
Good to have: TM Vault core banking knowledge.

Posted 1 day ago

Apply

0 years

0 Lacs

Bhopal, Madhya Pradesh, India

On-site

Linkedin logo

Company Description
Brandsmashers is a dynamic company driving digital transformation across e-commerce, agriculture, healthcare, and education sectors. We specialize in technologies such as React/Next.js, Vue.js, Node.js, Java, React Native, and AWS to deliver innovative and efficient solutions. Our expertise helps enhance online presence, optimize operations, and develop cutting-edge mobile applications tailored to industry-specific challenges. We prioritize understanding your business objectives to deliver measurable results and long-term success, staying at the forefront of technology trends to maintain a competitive edge.

Role Description
This is a full-time on-site role for a Golang Developer, located in Bhopal. The Golang Developer will be responsible for designing, developing, and maintaining high-performance APIs and web services. Daily tasks include writing clean, scalable code, debugging and resolving technical issues, and collaborating with cross-functional teams to deliver high-quality products. The role involves performance tuning, continuous integration and deployment, and contributing to project planning and documentation.

Qualifications
Proficiency in Golang, with solid understanding of its syntax and features.
Experience in developing RESTful APIs and microservices architecture.
Familiarity with front-end technologies such as React.js, Vue.js.
Knowledge of AWS Cloud Services, Docker, and Kubernetes.
Understanding of SQL/NoSQL databases.
Experience with code versioning tools, such as Git.
Strong problem-solving skills and ability to work in a collaborative environment.
Bachelor's degree in Computer Science, Engineering, or a related field.
Experience in an Agile development environment is a plus.

Posted 1 day ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

We are seeking an experienced DevOps Engineer to join our team. In this role, you will be responsible for designing, implementing, and maintaining secure cloud infrastructure using cloud-based technologies, including Oracle and Microsoft platforms. You will build and support scalable and reliable application systems and automate deployments. Additionally, you will integrate various systems and technologies using REST APIs and automate the software development and deployment lifecycle. Leveraging automation and monitoring tools, along with AI-powered solutions, you will ensure the smooth operation of our cloud-based systems.

Key Areas of Responsibility
Implement automation to control and orchestrate cloud workloads, managing the build and deployment cycles for each deployed solution via CI/CD.
Utilize a wide variety of cloud-based services, including containers, App Services, APIs, and SaaS-oriented integration.
Work with GitHub and CI/CD tools (e.g., Jenkins, GitHub Actions, Maven/ANT).
Create and maintain build and deployment configurations using Helm and YAML.
Manage the software change control process, including Quality Control and SCM audits, enforcing adherence to all change control and code management processes.
Continuously manage and maintain releases, with a clear understanding of the release management process.
Collaborate with cross-functional teams to ensure seamless integration and deployment of cloud-based solutions.
Apply problem-solving, teamwork, and communication skills, reflecting the collaborative nature of the role.
Perform builds and environment configurations.

Required Skills and Experience
5+ years of overall experience; expertise in automating the software development and deployment lifecycle using Jenkins, GitHub Actions, SAST, DAST, compliance tooling, and Oracle ERP DevOps tools.
Proficient with Unix shell scripting, SQL*Plus, PL/SQL, and Oracle database objects.
Understanding of branching models is important.
Experience in creating cloud resources using automation tools.
Strong hands-on experience with Terraform and Azure Infrastructure as Code (IaC).
Hands-on experience in GitOps, Flux CD/Argo CD, Jenkins, Groovy.
Building and deploying Java and .NET applications, and Liquibase database deployments.
Proficient with Azure cloud concepts: creating Azure Container Apps, Kubernetes, load balancers, Az CLI, kubectl, observability, APM, and application performance reviews.
Azure AZ-104 or AZ-400 certification is a plus.

Posted 1 day ago

Apply

Exploring Kubernetes Jobs in India

Kubernetes is a popular open-source platform that automates the deployment, scaling, and management of containerized applications. In recent years, the demand for Kubernetes professionals in India has been on the rise as more and more companies adopt containerization and cloud-native technologies. Job seekers in India looking to advance their careers in the field of Kubernetes have a plethora of opportunities awaiting them.
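
To make that concrete, the sketch below shows how a Deployment, the Kubernetes object most commonly used to describe a containerized application, can be created programmatically. This is a minimal, illustrative example only: it assumes the official `kubernetes` Python client is installed (pip install kubernetes) and a working kubeconfig is available, and the name "web", the "default" namespace, and the nginx image are placeholders rather than anything required by the roles listed here.

```python
# Minimal sketch: declare and create a 3-replica Deployment with the official
# Kubernetes Python client. All resource names and the container image are
# illustrative assumptions, not taken from any listing above.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config; use load_incluster_config() inside a pod

labels = {"app": "web"}  # shared by the Deployment selector and its pod template

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web", labels=labels),
    spec=client.V1DeploymentSpec(
        replicas=3,  # the control plane keeps 3 pods running and replaces any that fail
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",  # placeholder image
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

# Submit the desired state to the API server; the Deployment controller does the rest.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

In day-to-day work the same object is more often written as a YAML manifest and applied with kubectl or Helm so the desired state lives in version control; the API call above simply expresses that same declarative model through code.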

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

Average Salary Range

The average salary range for Kubernetes professionals in India varies based on experience levels. Entry-level professionals can expect to earn around INR 6-8 lakhs per annum, while experienced Kubernetes experts with 5+ years of experience can earn upwards of INR 15-20 lakhs per annum.

Career Path

A typical career path in Kubernetes roles may include:

  • Junior Kubernetes Engineer
  • Kubernetes Developer
  • Senior Kubernetes Engineer
  • Kubernetes Architect
  • Kubernetes Lead

Related Skills

In addition to Kubernetes expertise, professionals in this field are often expected to have knowledge and experience in the following areas:

  • Docker
  • Cloud computing (AWS, GCP, Azure)
  • Linux administration
  • Container orchestration
  • CI/CD pipelines

Interview Questions

  • What is Kubernetes and how does it differ from Docker? (basic)
  • Explain the difference between a pod and a container in Kubernetes. (basic)
  • How do you handle network communication between pods in Kubernetes? (medium)
  • What is a Kubernetes deployment and how does it work? (medium)
  • How do you troubleshoot a Kubernetes cluster that is not scheduling pods properly? (advanced)
  • Explain the concept of Persistent Volumes in Kubernetes. (medium)
  • How do you scale a deployment in Kubernetes? (basic; a worked sketch follows this list)
  • What is a Kubernetes Operator and how does it work? (advanced)
  • How do you secure a Kubernetes cluster? (medium)
  • Describe the role of etcd in a Kubernetes cluster. (advanced)
  • What is Ingress in Kubernetes and how is it different from a Service? (medium)
  • How do you monitor the performance of a Kubernetes cluster? (medium)
  • Explain the concept of Kubernetes namespaces. (basic)
  • How do you upgrade a Kubernetes cluster to a newer version? (medium)
  • What are the benefits of using Helm in Kubernetes? (medium)
  • How does Kubernetes handle storage? (advanced)
  • Describe the differences between StatefulSets and Deployments in Kubernetes. (medium)
  • What is a Kubernetes ConfigMap and how is it used? (basic)
  • How do you handle rolling updates in Kubernetes? (medium)
  • Explain the concept of Kubernetes labels and selectors. (basic)
  • How do you manage secrets in Kubernetes? (medium)
  • Describe the role of kube-proxy in a Kubernetes cluster. (advanced)
  • How does Kubernetes handle load balancing? (medium)
  • What is a Kubernetes controller and how does it work? (medium)
  • How do you perform a backup and restore operation in Kubernetes? (advanced)
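
For the scaling question flagged above, here is a hedged sketch of one possible answer using the same Python client: patch the Deployment's scale subresource, then list the matching pods to confirm the controller has converged. The Deployment name "web", the "default" namespace, and the "app=web" label are illustrative assumptions; answering with kubectl scale, or by changing the replica count in a version-controlled manifest or Helm values file, is equally valid.

```python
# Hedged sketch: scale an existing (assumed) Deployment "web" to 5 replicas
# and check the resulting pods. Names and labels are placeholders.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()
core = client.CoreV1Api()

# Equivalent in spirit to: kubectl scale deployment web --replicas=5
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 5}},
)

# Confirm the Deployment's ReplicaSet is bringing up the pods
# (newly created pods may still show as Pending while they schedule).
pods = core.list_namespaced_pod(namespace="default", label_selector="app=web")
for pod in pods.items:
    print(pod.metadata.name, pod.status.phase)
```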

Closing Remark

As the demand for Kubernetes professionals continues to grow in India, now is the perfect time to upskill and prepare for exciting career opportunities in this field. By mastering Kubernetes and related technologies, you can position yourself as a valuable asset in the job market and secure rewarding roles with top companies. So, brush up on your skills, ace those interviews, and embark on a successful career journey in Kubernetes!


Featured Companies