
35919 Kubernetes Jobs - Page 16

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Thiruvananthapuram

On-site

What you'll do:
- Design, develop, test, deploy, maintain, and improve high-scale cloud-native applications across the full engineering stack using Java, Spring Boot, Angular, etc.
- Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.)
- Mentor high-performing software engineers to achieve business goals while reporting to a senior tech lead
- Manage project priorities, deadlines, and deliverables
- Participate in a tight-knit, globally distributed engineering team
- Triage product or system issues and debug, track, and resolve them by analyzing root causes and their impact on network or service operations and quality
- Collaborate on scalability issues involving access to data and information
- Actively participate in sprint planning, sprint retrospectives, and other team activities

Cloud certification strongly preferred.

What experience you need:
- Bachelor's degree or equivalent experience
- 5+ years of experience writing, debugging, and troubleshooting code in mainstream Java and Spring Boot
- 5+ years of experience designing and developing microservices using Java, Spring Boot, GCP SDKs (or any cloud SDK), and GKE/Kubernetes
- 5+ years of experience designing and developing cloud-native solutions
- 5+ years of experience deploying and releasing software using Jenkins CI/CD pipelines, with an understanding of infrastructure-as-code concepts, Helm charts, and Terraform constructs
- 1+ years of experience leading an engineering team

What could set you apart:
- Self-starter who identifies and responds to priority shifts with minimal supervision
- Awareness of the latest technologies and trends
- Good knowledge of software configuration management systems
- UI development (e.g., HTML, JavaScript, Angular, Bootstrap)
- Source code control management systems (e.g., SVN, Git, GitHub) and build tools like Maven and Gradle
- Agile environments (e.g., Scrum, XP)
- Relational databases (e.g., SQL Server, MySQL)
- Atlassian tooling (e.g., JIRA, Confluence) and GitHub
- Developing with a modern JDK (v1.7+)
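The Jenkins/Helm deployment flow named in the requirements could, as a rough sketch, look like the following declarative pipeline. The registry URL, chart path, and release name are illustrative assumptions, not details from the posting:

```groovy
// Declarative Jenkins pipeline sketch: build the Spring Boot artifact,
// package a container image, and roll it out via a Helm upgrade.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'
            }
        }
        stage('Image') {
            steps {
                // Hypothetical registry; tag each image with the build number.
                sh 'docker build -t registry.example.com/app:${BUILD_NUMBER} .'
                sh 'docker push registry.example.com/app:${BUILD_NUMBER}'
            }
        }
        stage('Deploy') {
            steps {
                // Chart path and release name are assumptions.
                sh 'helm upgrade --install app ./charts/app --set image.tag=${BUILD_NUMBER}'
            }
        }
    }
}
```

`helm upgrade --install` makes the deploy stage idempotent: it creates the release on first run and upgrades it thereafter.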

Posted 1 day ago

Apply

8.0 years

0 Lacs

Haryana, India

On-site

Job Title: Senior Cloud Engineer - AWS
Experience: 8+ years
Location: Gurgaon/Hyderabad
Company Name: Incedo Technology

Job Summary: We are seeking an experienced Senior Cloud Engineer with over 10 years of experience, specializing in AWS architecture and cloud migration. The ideal candidate will have a strong background in designing and implementing scalable, secure, and cost-effective cloud solutions using various AWS services. Expertise in Terraform, EFS, API Gateway, and EKS is essential.

Key Responsibilities:
- Lead the design and implementation of AWS-based cloud architecture for enterprise applications
- Drive cloud migration initiatives from on-premises to AWS, ensuring minimal downtime and risk
- Architect and implement infrastructure as code using Terraform
- Design and configure Amazon EKS, API Gateway, and EFS-based solutions to meet application requirements
- Evaluate and integrate AWS-native services to enhance scalability, performance, and security
- Collaborate with DevOps, security, and application teams to ensure robust cloud architecture
- Provide technical leadership and mentorship to junior architects and engineers
- Create and maintain architectural documentation and best-practice guidelines

Required Skills & Experience:
- 10+ years of overall IT experience, with at least 4-5 years in AWS cloud architecture
- Deep understanding of AWS services including EC2, S3, IAM, VPC, Lambda, CloudWatch, and RDS
- Strong expertise in: cloud migration strategies and execution; Infrastructure as Code (IaC) using Terraform; Amazon EKS (Elastic Kubernetes Service); AWS API Gateway integration and management; Amazon EFS (Elastic File System)
- Proven experience designing high-availability and disaster-recovery strategies in AWS
- Hands-on experience with CI/CD pipelines and DevOps practices is a plus
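As a hedged illustration of the Terraform/EFS/EKS combination listed above, a minimal IaC sketch might look like this. The resource names, the referenced IAM role, and the subnet variable are assumptions for illustration, not details from the posting:

```hcl
# Minimal Terraform sketch: an encrypted EFS file system and an EKS cluster.
resource "aws_efs_file_system" "app_data" {
  encrypted = true

  tags = {
    Name = "app-data"
  }
}

resource "aws_eks_cluster" "main" {
  name     = "main-cluster"
  role_arn = aws_iam_role.eks.arn # assumes an IAM role defined elsewhere

  vpc_config {
    subnet_ids = var.private_subnet_ids # assumed input variable
  }
}
```

In practice the EFS file system would also need `aws_efs_mount_target` resources in each subnet before pods on the cluster could mount it.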

Posted 1 day ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Company
Founded in 2011, ReNew is one of the largest renewable energy companies globally, with a leadership position in India. Listed on Nasdaq under the ticker RNW, ReNew develops, builds, owns, and operates utility-scale wind energy projects, utility-scale solar energy projects, utility-scale firm power projects, and distributed solar energy projects. In addition to being a major independent power producer in India, ReNew is evolving into an end-to-end decarbonization partner, providing solutions in a just and inclusive manner across clean energy, green hydrogen, and value-added energy offerings through digitalisation, storage, and carbon markets that are increasingly integral to addressing climate change. With a total capacity of more than 13.4 GW (including projects in the pipeline), ReNew's solar and wind energy projects are spread across 150+ sites, with a presence spanning 18 states in India, contributing 1.9% of India's power capacity. Consequently, this has helped avoid 0.5% of India's total carbon emissions and 1.1% of India's total power sector emissions. In over 10 years of operation, ReNew has generated almost 1.3 lakh jobs, directly and indirectly. ReNew has achieved market leadership in the Indian renewable energy industry against the backdrop of the Government of India's policies to promote growth of this sector. ReNew stands committed to providing clean, safe, affordable, and sustainable energy for all and has been at the forefront of leading climate action in India.

Job Description
As a Data Scientist, you will play a key role in designing, building, and deploying scalable machine learning solutions, with a focus on real-world applications including Generative AI, optimization, forecasting, and operational analytics. You will work closely with data scientists, engineers, and business stakeholders to take AI models from ideation to production, ensuring high-quality delivery and integration within ReNew's technology ecosystem.

Roles and Responsibilities
- Build and deploy production-grade ML pipelines for varied use cases across operations, manufacturing, supply chain, and more
- Work hands-on in designing, training, and fine-tuning models across traditional ML, deep learning, and GenAI (LLMs, diffusion models, etc.)
- Collaborate with data scientists to transform exploratory notebooks into scalable, maintainable, and monitored deployments
- Implement CI/CD pipelines, version control, and experiment tracking using tools like MLflow, DVC, or similar
- Perform shadow deployment and A/B testing of production models
- Partner with data engineers to build data pipelines that support real-time or batch model inference
- Ensure high availability, performance, and observability of deployed ML solutions using MLOps best practices
- Conduct code reviews and performance tuning, and contribute to ML infrastructure improvements
- Support the end-to-end lifecycle of ML products
- Contribute to knowledge sharing, reusable component development, and internal upskilling initiatives

Eligibility Criteria
- Bachelor's degree in Computer Science, Engineering, Data Science, or a related field; Master's degree preferred
- 4–6 years of experience developing and deploying machine learning models, with significant exposure to MLOps practices
- Experience implementing and productionizing Generative AI applications using LLMs (e.g., OpenAI, Hugging Face, LangChain, RAG architectures)
- Strong programming skills in Python; familiarity with ML libraries such as scikit-learn, TensorFlow, and PyTorch
- Hands-on experience with tools like MLflow, Docker, Kubernetes, FastAPI/Flask, Airflow, Git, and cloud platforms (Azure/AWS)
- Solid understanding of software engineering fundamentals and DevOps/MLOps workflows
- Exposure to at least 2-3 industry domains (energy, manufacturing, finance, etc.) preferred
- Excellent problem-solving skills, an ownership mindset, and the ability to work in agile cross-functional teams

Main Interfaces
The role involves close collaboration with data scientists, data engineers, business stakeholders, platform teams, and solution architects.
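The shadow-deployment duty listed among the responsibilities can be sketched in plain Python. Both model functions below are hypothetical stand-ins (in production they would be calls to deployed model endpoints); the point is the serving pattern: the primary answer is returned to the caller, while the candidate model runs on the same input and only its disagreement is logged.

```python
import logging

# Hypothetical stand-ins for a live model and its shadow candidate.
def primary_model(features):
    # Current production model: mean of the features.
    return sum(features) / len(features)

def shadow_model(features):
    # Candidate model under evaluation: median of the features.
    return sorted(features)[len(features) // 2]

def predict_with_shadow(features, log=logging.getLogger("shadow")):
    """Serve the primary prediction; run the shadow model on the same
    input and record the disagreement for offline comparison."""
    primary = primary_model(features)
    try:
        shadow = shadow_model(features)
        log.info("shadow_delta=%f", abs(primary - shadow))
    except Exception:
        # A failing shadow must never affect the user-facing response.
        log.exception("shadow model failed")
    return primary
```

Because the shadow call is wrapped in its own try/except, a broken candidate model degrades to a log entry instead of a user-visible error, which is the property that makes shadow deployment safe to run against live traffic.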

Posted 1 day ago

Apply

5.0 - 8.0 years

6 - 7 Lacs

Gurgaon

On-site

Gurgaon | 5 to 8 years | Full Time

We are looking for a highly skilled DevOps Engineer to support and optimize the deployment, scalability, and reliability of our cloud-based microservices architecture. The ideal candidate will have hands-on experience with containerized deployments, Kubernetes, monitoring tools, JVM tuning, and performance optimization.

Responsibilities / Scope of Work:
- Docker Deployment & Optimization: Create Dockerfiles and multi-stage builds for Java, Node, and Angular apps. Set up Docker Compose for local development. Optimize image size and performance.
- System Design for Microservices: Design infrastructure architecture for microservices-based applications. Define service communication patterns (sync/async, service mesh, etc.). Ensure fault tolerance and high availability.
- Kubernetes (K8s) Deployment: Configure Deployments, StatefulSets, Services, and Ingress. Implement horizontal/vertical pod autoscaling. Set up Helm charts for reusable deployment patterns.
- Load Balancing and API Gateway: Configure NGINX and Spring Cloud Gateway for internal and external traffic routing. Implement rate limiting, circuit breakers, and routing policies.
- NGINX Web Server Configuration: Set up web servers for micro frontends and static content. Configure SSL, gzip compression, and custom headers.
- Auto-scaling and Resource Optimization: Configure auto-scaling policies based on CPU, memory, or custom metrics. Analyze and tune resource limits/requests for performance and cost.
- Monitoring & Alerting: Integrate Prometheus and Grafana to monitor JVM heap usage, GC activity, CPU, memory, and network metrics, and application-level metrics (via Micrometer or custom exporters). Set up alerts (e.g., Slack, email, and PagerDuty).
- JVM Performance Tuning: Analyze GC logs and memory usage in containerized environments. Optimize JVM options (-Xms, -Xmx, GC settings, etc.). Integrate VisualVM or JFR into CI/CD diagnostics.
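The "Dockerfiles and multi-stage builds for Java apps" responsibility could, as a minimal sketch, look like the following. The base images and paths here are common choices, not taken from the posting:

```dockerfile
# Multi-stage build sketch for a Spring Boot service.
# Stage 1: build the jar with Maven; this layer never ships to production.
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /src
COPY pom.xml .
RUN mvn -B dependency:go-offline
COPY src ./src
RUN mvn -B package -DskipTests

# Stage 2: a slim JRE-only runtime image, keeping image size down.
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=build /src/target/app.jar app.jar
# Container-aware heap sizing mirrors the JVM tuning duties above.
ENTRYPOINT ["java", "-XX:MaxRAMPercentage=75.0", "-jar", "app.jar"]
```

Separating the Maven build stage from the JRE runtime stage is what delivers the "optimize image size" goal: the final image carries only the jar and a JRE, not the compiler toolchain or the dependency cache.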

Posted 1 day ago

Apply

10.0 years

6 - 9 Lacs

Gurgaon

On-site

Company Description
We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale, across all devices and digital mediums, and our people exist everywhere in the world (17500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in!

Job Description

REQUIREMENTS:
- Total experience of 10+ years
- Extensive experience in back-end development using Java 8 or higher, Spring Framework (Core/Boot/MVC), Hibernate/JPA, and microservices architecture
- Strong experience in AWS (API Gateway, Fargate, S3, DynamoDB, SNS)
- Strong experience with SOAP and PostgreSQL
- Hands-on experience with REST APIs, caching systems (e.g., Redis), and messaging systems like Kafka
- Proficiency in Service-Oriented Architecture (SOA) and web services (Apache CXF, JAX-WS, JAX-RS, SOAP, REST)
- Hands-on experience with multithreading and cloud development
- Strong working experience in data structures and algorithms, unit testing, and object-oriented programming (OOP) principles
- Experience with DevOps tools and technologies such as Ansible, Docker, Kubernetes, Puppet, Jenkins, and Chef
- Proficiency in build automation tools like Maven, Ant, and Gradle
- Hands-on experience with cloud technologies such as AWS/Azure
- Strong understanding of UML and design patterns
- Ability to simplify solutions, optimize processes, and efficiently resolve escalated issues
- Strong problem-solving skills and a passion for continuous improvement
- Excellent communication skills and the ability to collaborate effectively with cross-functional teams
- Enthusiasm for learning new technologies and staying updated on industry trends

RESPONSIBILITIES:
- Writing and reviewing great quality code
- Understanding functional requirements thoroughly and analyzing the client's needs in the context of the project
- Envisioning the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to realize it
- Determining and implementing design methodologies and tool sets
- Enabling application development by coordinating requirements, schedules, and activities
- Leading/supporting UAT and production rollouts
- Creating, understanding, and validating the WBS and estimated effort for a given module/task, and being able to justify it
- Addressing issues promptly, responding positively to setbacks and challenges with a mindset of continuous improvement
- Giving constructive feedback to team members and setting clear expectations
- Helping the team troubleshoot and resolve complex bugs
- Coming up with solutions to any issue raised during code/design review and being able to justify the decision taken
- Carrying out POCs to make sure the suggested design/technologies meet the requirements

Qualifications
Bachelor's or master's degree in Computer Science, Information Technology, or a related field.

Posted 1 day ago

Apply

12.0 years

1 - 10 Lacs

Gurgaon

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Manage and mentor a team of data engineers, fostering a culture of innovation and continuous improvement
- Design and maintain robust data architectures, including databases and data warehouses
- Oversee the development and optimization of data pipelines for efficient data processing
- Implement measures to ensure data integrity, including validation, cleansing, and governance practices
- Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver solutions
- Analyze, synthesize, and interpret data from a variety of sources, investigating, reconciling, and explaining data differences to understand the complete data lifecycle
- Architect with a modern technology stack and design public cloud applications leveraging Azure
- Apply a basic, structured, standard approach to work
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Undergraduate degree or equivalent experience
- 12+ years of implementation experience on time-critical production projects following key software development practices
- 8+ years of programming experience in Python or any programming language
- 6+ years of hands-on programming experience in Spark using Scala/Python
- 4+ years of hands-on experience with Azure services such as Azure Databricks, Azure Data Factory, Azure Functions, and Azure App Service
- Good knowledge of writing SQL queries
- Good knowledge of building REST APIs
- Good knowledge of tools like Azure DevOps and GitHub
- Ability to understand the existing application codebase, perform impact analysis, and update the code when required based on business logic or for optimization
- Ability to learn modern technologies and be part of fast-paced teams
- Proven excellent analytical and communication skills (both verbal and written)
- Proficiency with AI-powered development tools such as GitHub Copilot, AWS CodeWhisperer, Google's Codey (Duet AI), or any relevant tools is expected. Candidates should be adept at integrating these tools into their workflows to accelerate development, improve code quality, and enhance delivery velocity, and are expected to proactively leverage AI tools throughout the software development lifecycle to drive faster iteration, reduce manual effort, and boost overall engineering productivity

Preferred Qualification:
- Good knowledge of Docker and Kubernetes services

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 1 day ago

Apply

0 years

0 Lacs

Gurgaon

On-site

MongoDB's mission is to empower innovators to create, transform, and disrupt industries by unleashing the power of software and data. We enable organizations of all sizes to easily build, scale, and run modern applications by helping them modernize legacy workloads, embrace innovation, and unleash AI. Our industry-leading developer data platform, MongoDB Atlas, is the only globally distributed, multi-cloud database and is available in more than 115 regions across AWS, Google Cloud, and Microsoft Azure. Atlas allows customers to build and run applications anywhere, on premises or across cloud providers. With offices worldwide and over 175,000 new developers signing up to use MongoDB every month, it's no wonder that leading organizations, like Samsung and Toyota, trust MongoDB to build next-generation, AI-powered applications.

MongoDB is growing rapidly and seeking a Software Engineer for the Data Platform team to be a key contributor to the overall internal data platform at MongoDB. The Data Platform team focuses on building reliable, flexible, and high-quality data infrastructure, such as a streaming platform, ML platform, and experimentation platform, to enable all of MongoDB to utilize the power of data. As a Software Engineer, you will design and build a scalable data platform to help drive MongoDB's growth as a product and as a company, while also lending your technical expertise to other engineers as a mentor and trainer. You will tackle complex platform problems with the goal of making our platform more scalable, reliable, and robust. We are looking to speak to candidates who are based in Gurugram for our hybrid working model.

Who you are
- You have worked on building production-grade applications and are capable of making backend improvements in languages such as Python and Go
- You have experience building tools and platforms for business users and software developers
- You have experience with, or working knowledge of, cloud platforms and services
- You enjoy working on full-stack applications, including UI/UX, API design, databases, and more
- You're looking for a high-impact, high-growth role with a variety of opportunities to drive adoption of data via services and tooling
- You're passionate about developing reliable and high-quality software
- You're curious, collaborative, and intellectually honest
- You're a great team player

What you will do
- Design and build UI, API, and other data platform services for data users, including but not limited to analysts, data scientists, and software engineers
- Work closely with product design teams to improve internal data platform services, with an emphasis on UI/UX
- Perform code reviews with peers and make recommendations on how to improve our code and software development processes
- Design boilerplate architecture that can abstract the underlying data infrastructure from end users
- Further improve the team's testing and development processes
- Document and educate the larger team on best practices
- Help drive optimization, testing, and tooling to improve data platform quality
- Collaborate with other software engineers, machine learning experts, and stakeholders, taking learning and leadership opportunities that will arise every single day

Bonus Points
- You have experience with a modern JavaScript environment, including frontend frameworks such as React, and TypeScript
- You are familiar with data infrastructure and tooling such as Presto, Hive, Spark, and BigQuery
- You are familiar with deployment and configuration tools such as Kubernetes, Drone, and Terraform
- You are interested in web design and have experience working directly with product designers
- You have experience designing and building microservices
- You have experience building a machine learning platform using tools like SparkML, TensorFlow, Seldon Core, etc.

Success Measures
- In three months, you will have familiarized yourself with much of our data platform services, be making regular contributions to our codebase, be collaborating regularly with stakeholders to widen your knowledge, and be helping to resolve incidents and respond to user requests.
- In six months, you will have successfully investigated, scoped, executed, and documented a small-to-medium-sized project, and worked with stakeholders to make sure their user experiences are vastly enhanced by implementing improvements to our platform services.
- In a year, you will have become the key person for several projects within the team and will have contributed not only to the data platform's roadmap but to MongoDB's data-driven journey. You will have made several sizable contributions to the project and will be regularly looking to improve the overall stability and scalability of the architecture.

To drive the personal growth and business impact of our employees, we're committed to developing a supportive and enriching culture for everyone. From employee affinity groups to fertility assistance and a generous parental leave policy, we value our employees' wellbeing and want to support them along every step of their professional and personal journeys. Learn more about what it's like to work at MongoDB, and help us make an impact on the world!

MongoDB is committed to providing any necessary accommodations for individuals with disabilities within our application and interview process. To request an accommodation due to a disability, please inform your recruiter. MongoDB is an equal opportunities employer.

Requisition ID: 2263175955

Posted 1 day ago

Apply

7.0 - 9.0 years

0 Lacs

New Delhi, Delhi, India

On-site

The purpose of this role is to understand, model, and facilitate change in a significant area of the business and technology portfolio, either by line of business, geography, or specific architecture domain, while building the overall architecture capability and knowledge base of the company.

Job Description:

Role Overview:
We are seeking a highly skilled and motivated Cloud Data Engineering Manager to join our team. The role is critical to the development of a cutting-edge reporting platform designed to measure and optimize online marketing campaigns. The GCP Data Engineering Manager will design, implement, and maintain scalable, reliable, and efficient data solutions on Google Cloud Platform (GCP). The role focuses on enabling data-driven decision-making by developing ETL/ELT pipelines, managing large-scale datasets, and optimizing data workflows. The ideal candidate is a proactive problem-solver with strong technical expertise in GCP, a passion for data engineering, and a commitment to delivering high-quality solutions aligned with business needs.

Key Responsibilities:

Data Engineering & Development:
- Design, build, and maintain scalable ETL/ELT pipelines for ingesting, processing, and transforming structured and unstructured data
- Implement enterprise-level data solutions using GCP services such as BigQuery, Dataform, Cloud Storage, Dataflow, Cloud Functions, Cloud Pub/Sub, and Cloud Composer
- Develop and optimize data architectures that support real-time and batch data processing
- Build, optimize, and maintain CI/CD pipelines using tools like Jenkins, GitLab, or Google Cloud Build
- Automate testing, integration, and deployment processes to ensure fast and reliable software delivery

Cloud Infrastructure Management:
- Manage and deploy GCP infrastructure components to enable seamless data workflows
- Ensure data solutions are robust, scalable, and cost-effective, leveraging GCP best practices

Infrastructure Automation and Management:
- Design, deploy, and maintain scalable and secure infrastructure on GCP
- Implement Infrastructure as Code (IaC) using tools like Terraform
- Manage Kubernetes clusters (GKE) for containerized workloads

Collaboration and Stakeholder Engagement:
- Work closely with cross-functional teams, including data analysts, data scientists, DevOps, and business stakeholders, to deliver data projects aligned with business goals
- Translate business requirements into scalable technical solutions while collaborating with team members to ensure successful implementation

Quality Assurance & Optimization:
- Implement best practices for data governance, security, and privacy, ensuring compliance with organizational policies and regulations
- Conduct thorough quality assurance, including testing and validation, to ensure the accuracy and reliability of data pipelines
- Monitor and optimize pipeline performance to meet SLAs and minimize operational costs

Qualifications and Certifications:
- Education: Bachelor's or master's degree in Computer Science, Information Technology, Engineering, or a related field
- Experience: Minimum of 7 to 9 years of experience in data engineering, with at least 4 years working on GCP cloud platforms; proven experience designing and implementing data workflows using GCP services like BigQuery, Dataform, Cloud Dataflow, Cloud Pub/Sub, and Cloud Composer
- Certifications: Google Cloud Professional Data Engineer certification preferred

Key Skills:

Mandatory Skills:
- Advanced proficiency in Python for data pipelines and automation
- Strong SQL skills for querying, transforming, and analyzing large datasets
- Strong hands-on experience with GCP services, including Cloud Storage, Dataflow, Cloud Pub/Sub, Cloud SQL, BigQuery, Dataform, Compute Engine, and Kubernetes Engine (GKE)
- Hands-on experience with CI/CD tools such as Jenkins, GitHub, or Bitbucket
- Proficiency in Docker, Kubernetes, and Terraform or Ansible for containerization, orchestration, and infrastructure as code (IaC)
- Familiarity with workflow orchestration tools like Apache Airflow or Cloud Composer
- Strong understanding of Agile/Scrum methodologies

Nice-to-Have Skills:
- Experience with other cloud platforms like AWS or Azure
- Knowledge of data visualization tools (e.g., Power BI, Looker, Tableau)
- Understanding of machine learning workflows and their integration with data pipelines

Soft Skills:
- Strong problem-solving and critical-thinking abilities
- Excellent communication skills to collaborate with technical and non-technical stakeholders
- Proactive attitude towards innovation and learning
- Ability to work independently and as part of a collaborative team

Location: Bengaluru
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
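The ETL/ELT pipelines this role describes follow a common extract/transform/load shape. The sketch below illustrates it with only the Python standard library; in this role the source would be Cloud Storage and the sink BigQuery, but the structure is the same. The sample data, table name, and aggregation are illustrative assumptions:

```python
import csv
import io
import sqlite3

# Hypothetical raw campaign data; stands in for a file read from object storage.
RAW = "campaign,clicks\nspring_sale,120\nspring_sale,80\nbrand,40\n"

def extract(raw: str):
    """Parse CSV text into a list of dict rows."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows):
    """Aggregate clicks per campaign (a typical reporting rollup)."""
    totals = {}
    for row in rows:
        totals[row["campaign"]] = totals.get(row["campaign"], 0) + int(row["clicks"])
    return sorted(totals.items())

def load(pairs, conn):
    """Write the aggregated pairs into a warehouse-style table."""
    conn.execute("CREATE TABLE IF NOT EXISTS clicks (campaign TEXT, total INTEGER)")
    conn.executemany("INSERT INTO clicks VALUES (?, ?)", pairs)
    conn.commit()

conn = sqlite3.connect(":memory:")  # sqlite stands in for BigQuery here
load(transform(extract(RAW)), conn)
result = conn.execute("SELECT campaign, total FROM clicks ORDER BY campaign").fetchall()
```

Keeping extract, transform, and load as separate functions is what makes such a pipeline testable in isolation and easy to reschedule under an orchestrator like Cloud Composer.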

Posted 1 day ago

Apply

5.0 years

9 - 13 Lacs

Gurgaon

On-site

Job Title: Senior Software Engineer – AI/ML (Tech Lead)
Experience: 5+ Years
Location: Gurugram
Notice Period: Immediate Joiners Only

Roles & Responsibilities
Design, develop, and deploy robust, scalable AI/ML-driven products and features across diverse business verticals.
Provide technical leadership and mentorship to a team of engineers, ensuring delivery excellence and skill development.
Drive end-to-end execution of projects, from architecture and coding to testing, deployment, and post-release support.
Collaborate cross-functionally with Product, Data, and Design teams to align technology efforts with product strategy.
Build and maintain ML infrastructure and model pipelines, ensuring performance, versioning, and reproducibility.
Lead and manage engineering operations, including monitoring, incident response, logging, performance tuning, and uptime SLAs.
Take ownership of CI/CD pipelines, DevOps processes, and release cycles to support rapid, reliable deployments.
Conduct code reviews, enforce engineering best practices, and manage team deliverables and timelines.
Proactively identify bottlenecks or gaps in engineering or operations and implement process improvements.
Stay current with trends in AI/ML, cloud technologies, and MLOps to continuously elevate team capabilities and product quality.

Tools & Platforms
Languages & Frameworks: Python, FastAPI, PyTorch, TensorFlow, Hugging Face Transformers
MLOps & Infrastructure: MLflow, DVC, Airflow, Docker, Kubernetes, Terraform, AWS/GCP
CI/CD & DevOps: GitHub, GitLab CI/CD, Jenkins
Monitoring & Logging: Prometheus, Grafana, ELK Stack (Elasticsearch, Logstash, Kibana), Sentry
Project & Team Management: Jira, Notion, Confluence
Analytics: Mixpanel, Google Analytics
Collaboration & Prototyping: Slack, Figma, Miro

Job Type: Full-time
Pay: ₹900,000.00 - ₹1,300,000.00 per year

Application Question(s):
Total years of experience developing AI/ML-based tools?
Total years of experience developing AI/ML projects?
Total years of experience handling a team?
Current CTC?
Expected CTC?
In how many days can you join us if shortlisted?
Current location?
Are you OK working from the office (Gurugram, Sector 54)?
Rate your English communication skills out of 10 (1 is lowest, 10 is highest).
Please list all the tech skills that make you a fit for this role.
Have you gone through the JD, and are you OK performing all the roles and responsibilities?

Work Location: In person
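The model-pipeline reproducibility responsibility in the listing above is often implemented by fingerprinting each training run so that the same config, data version, and code revision always map to the same run ID. A minimal, hypothetical sketch (function and field names are illustrative, not from the listing):

```python
import hashlib
import json

def run_fingerprint(config: dict, data_version: str, code_rev: str) -> str:
    """Derive a stable ID for an ML training run so results can be
    reproduced later: same config + data + code => same fingerprint."""
    payload = {
        "config": config,
        "data_version": data_version,
        "code_rev": code_rev,
    }
    # sort_keys makes the serialization independent of dict insertion order
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]
```

Tools like MLflow and DVC track this kind of lineage for you; the sketch only shows the underlying idea.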

Posted 1 day ago

Apply

3.0 years

20 - 25 Lacs

Gurgaon

Remote

About Us: Sun King (Greenlight Planet) is a multinational, for-profit business that designs, distributes, and finances solar-powered home energy products with an underserved population in mind: the 1.8 billion global consumers for whom the old-fashioned electrical grid is either unavailable or too expensive. After over a decade in business, the company is now a leading global brand in emerging markets across Asia and Sub-Saharan Africa. Greenlight's Sun King™ products provide modern light and energy to 32 million people in more than 60 countries, and the company has sold over 8 million products worldwide. From its wide range of trusted Sun King™ solar lamps and home energy systems, to its innovative distribution partnerships, to its EasyBuy™ pay-as-you-go consumer financing model, Greenlight Planet continuously strives to meet the evolving needs of the off-grid market.

Greenlight stays in touch with underserved consumers' needs in part by operating its own direct-to-consumer sales network, including thousands of trusted sales agents (called "Sun King Energy Officers") in local communities. For Sun King Energy Officers, this is not only a good source of income and employment; they also become important members of their communities, bringing light and catering to local energy needs.

Today, with over 2,700 full-time employees in 15 countries, we remain continuously impressed at how each new team member contributes unique and innovative solutions to the global off-grid challenge, from new product designs, to innovative sales and distribution strategies, to setting up better collection mechanisms, better training strategies, and more efficient logistics and after-sales service systems. We listen closely to each other to improve our products, our service, and ultimately, the lives of underserved consumers.
Job location: Gurugram (Hybrid)

About the role: Sun King is looking for a self-driven infrastructure engineer who is comfortable working in a fast-paced startup environment and balancing the needs of multiple development teams and systems. You will work on improving our current IaC, observability stack, and incident response processes. You will work with the data science, analytics, and engineering teams to build optimized CI/CD pipelines, scalable AWS infrastructure, and Kubernetes deployments.

What you would be expected to do:
Work with engineering, automation, and data teams on various infrastructure requirements.
Design modular and efficient GitOps CI/CD pipelines, agnostic to the underlying platform.
Manage AWS services for multiple teams.
Manage custom data store deployments, such as sharded MongoDB clusters, Elasticsearch clusters, and upcoming services.
Deploy and manage Kubernetes resources.
Deploy and manage custom metrics exporters, trace data, and custom application metrics; design dashboards and query metrics from multiple resources as an end-to-end observability stack solution.
Set up incident response services and design effective processes.
Deploy and manage critical platform services like OPA and Keycloak for IAM.
Advocate best practices for high availability and scalability when designing AWS infrastructure, building observability dashboards, implementing IaC, deploying to Kubernetes, and designing GitOps CI/CD pipelines.

You might be a strong candidate if you have/are:
Hands-on experience with Docker or any other container runtime, and with Linux, including the ability to perform basic administrative tasks.
Experience working with web servers (nginx, Apache) and cloud providers (preferably AWS).
Hands-on scripting and automation experience (Python, Bash), with experience debugging and troubleshooting Linux environments and cloud-native deployments.
Experience building CI/CD pipelines, with familiarity with monitoring and alerting systems (Grafana, Prometheus, and exporters).
Knowledge of web architecture, distributed systems, and single points of failure.
Familiarity with cloud-native deployments and concepts like high availability, scalability, and bottlenecks.
Good networking fundamentals: SSH, DNS, TCP/IP, HTTP, SSL, load balancing, reverse proxies, and firewalls.

Good to have:
Experience with backend development, setting up databases, and performance tuning using parameter groups.
Working experience in Kubernetes cluster administration and Kubernetes deployments.
Experience working alongside SecOps engineers.
Basic knowledge of Envoy, service mesh (Istio), and SRE concepts like distributed tracing.
Setup and use of OpenTelemetry, central logging, and monitoring systems.

Job Type: Full-time
Pay: ₹2,000,000.00 - ₹2,500,000.00 per year
Benefits:
Cell phone reimbursement
Flexible schedule
Health insurance
Internet reimbursement
Provident Fund
Work from home
Application Question(s):
What's your expected CTC?
What's your notice period?
What's your current CTC?
Experience:
AWS: 3 years (Required)
Linux: 3 years (Required)
Python: 2 years (Required)
Work Location: In person
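As a rough illustration of the "custom metrics exporters" work this role describes, here is a minimal sketch that renders gauge values in the Prometheus text exposition format. Names are illustrative; a real exporter would serve this string over HTTP at a /metrics endpoint, typically via the official client library rather than by hand:

```python
def render_prometheus_metrics(metrics: dict) -> str:
    """Render a dict of {name: (value, help_text)} gauges in the
    Prometheus text exposition format, as a tiny custom exporter might."""
    lines = []
    for name, (value, help_text) in sorted(metrics.items()):
        lines.append(f"# HELP {name} {help_text}")
        lines.append(f"# TYPE {name} gauge")
        lines.append(f"{name} {value}")
    # the exposition format is newline-delimited plain text
    return "\n".join(lines) + "\n"
```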

Posted 1 day ago

Apply

7.0 years

3 - 6 Lacs

Gurgaon

On-site

Additional Locations: India-Haryana, Gurgaon

Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance

At Boston Scientific, we'll give you the opportunity to harness all that's within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we'll help you in advancing your skills and career. Here, you'll be supported in progressing – whatever your ambitions.

System Administrator - AI/ML Platform:
We are looking for a detail-oriented and technically proficient AI/ML cloud platform administrator to manage, monitor, and secure our cloud-based platforms supporting machine learning and data science workloads. This role requires deep familiarity with both AWS and Azure cloud services, and strong experience in platform configuration, resource provisioning, access management, and operational automation. You will work closely with data scientists, MLOps engineers, and cloud security teams to ensure high availability, compliance, and performance of our AI/ML platforms.

Your responsibilities will include:
Provision, configure, and maintain ML infrastructure on AWS (e.g., SageMaker, Bedrock, EKS, EC2, S3) and Azure (e.g., Azure Foundry, Azure ML, AKS, ADF, Blob Storage).
Manage cloud resources (VMs, containers, networking, storage) to support distributed ML workflows.
Deploy and manage open-source ML orchestration frameworks such as LangChain and LangGraph.
Implement RBAC, IAM policies, Azure AD, and Key Vault configurations to manage secure access.
Monitor security events, handle vulnerabilities, and ensure data encryption and compliance (e.g., ISO, HIPAA, GDPR).
Monitor and optimize performance of ML services, containers, and jobs.
Set up observability stacks using Fiddler, CloudWatch, Azure Monitor, Grafana, Prometheus, or ELK.
Manage and troubleshoot issues related to container orchestration (Docker, Kubernetes – EKS/AKS).
Use Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Bicep to automate environment provisioning.
Collaborate with MLOps teams to automate deployment pipelines and model operationalization.
Implement lifecycle policies, quotas, and data backups for storage optimization.

Required Qualifications:
Bachelor's/Master's in Computer Science, Engineering, or a related discipline.
7 years in cloud administration, with 2+ years supporting AI/ML or data platforms.
Proven hands-on experience with both AWS and Azure.
Proficiency in Terraform, Docker, Kubernetes (AKS/EKS), Git, and Python or Bash scripting.
Security practices: IAM, RBAC, encryption standards, VPC/network setup.

Requisition ID: 611331

As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most – united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do – as a global business and as a global corporate citizen.

So, choosing a career with Boston Scientific (NYSE: BSX) isn't just business, it's personal. And if you're a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
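The storage lifecycle-policy responsibility mentioned in this listing amounts to deciding which snapshots to retain and which to prune. A hedged sketch, assuming a simple keep-newest-N plus age-based rule (names and thresholds are illustrative; in practice this is usually expressed as an S3 lifecycle rule or Azure Blob management policy rather than hand-rolled):

```python
from datetime import datetime, timedelta

def backups_to_delete(backups, keep_last=7, keep_days=30, now=None):
    """Given (name, created_at) pairs, return names eligible for deletion:
    everything except the newest `keep_last` snapshots and anything
    younger than `keep_days` days."""
    now = now or datetime.utcnow()
    # newest first
    ordered = sorted(backups, key=lambda b: b[1], reverse=True)
    keep = {name for name, _ in ordered[:keep_last]}
    cutoff = now - timedelta(days=keep_days)
    for name, created in ordered:
        if created >= cutoff:
            keep.add(name)
    return [name for name, _ in ordered if name not in keep]
```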

Posted 1 day ago

Apply

0.0 - 5.0 years

0 - 1 Lacs

Saibaba Colony, Coimbatore, Tamil Nadu

On-site

Job Title: Senior Linux Administrator / DevOps Engineer
Location: On-site / Coimbatore
Department: IT / Infrastructure
Reports To: Infrastructure Manager

Job Summary:
We are seeking an experienced Senior Linux Administrator / DevOps Engineer to lead the administration, optimization, and automation of our Linux-based infrastructure and DevOps processes. The ideal candidate has deep expertise in Linux systems, scripting, automation, CI/CD pipelines, containerization, and cloud platforms. You will play a critical role in maintaining system performance, ensuring high availability, enhancing security, and enabling rapid development cycles.

Key Responsibilities:
Manage, monitor, and troubleshoot Linux servers (Red Hat, Ubuntu, CentOS, etc.) across production and development environments.
Design, build, and maintain scalable DevOps pipelines (CI/CD) to support agile software development and deployment.
Automate system administration tasks using scripting languages (Bash, Python, etc.).
Implement and manage containerization technologies (Docker, Kubernetes).
Administer and optimize configuration management tools (Ansible, Puppet, Chef, etc.).
Collaborate with development teams to improve build, release, and deployment processes.
Support and manage cloud environments (AWS, Azure, GCP), including networking, compute, and storage services.
Ensure security hardening, compliance, and regular patching of Linux systems.
Monitor system health, tune performance, and respond proactively to incidents.
Maintain backup, disaster recovery, and business continuity procedures.
Provide mentorship and technical guidance to junior system administrators and DevOps team members.

Required Skills and Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent professional experience.
5+ years of professional experience as a Linux systems administrator.
3+ years in DevOps roles with experience building CI/CD pipelines.
Strong knowledge of Linux internals, file systems, and networking fundamentals.
Expertise in infrastructure-as-code tools (Terraform, CloudFormation).
Proficiency with Git version control.
Hands-on experience with container orchestration (Kubernetes, OpenShift, etc.).
Experience with monitoring/logging tools (Prometheus, Grafana, ELK Stack, etc.).
Strong scripting abilities (Bash, Python, Perl, or similar).
Working knowledge of cloud services (AWS, Azure, GCP).
Good understanding of security best practices in Linux and cloud environments.
Ability to troubleshoot complex issues quickly and effectively.

Preferred Qualifications:
Relevant certifications (RHCE, CKA, AWS Solutions Architect, etc.).
Experience with microservices architecture.
Knowledge of databases such as MySQL, PostgreSQL, MongoDB.
Experience working with Agile and DevOps methodologies.
Familiarity with service mesh technologies (Istio, Linkerd) and serverless computing.

Soft Skills:
Strong analytical and problem-solving skills.
Excellent communication and collaboration abilities.
Ability to work independently and handle multiple priorities.
Passion for continuous learning and adopting new technologies.

Job Types: Full-time, Permanent
Pay: ₹60,000.00 - ₹100,000.00 per month
Benefits:
Health insurance
Life insurance
Provident Fund
Ability to commute/relocate:
Saibaba Colony, Coimbatore, Tamil Nadu: Reliably commute or plan to relocate before starting work (Preferred)
Application Question(s):
This is a six-day work week job; are you interested?
Experience:
Linux: 5 years (Preferred)
Work Location: In person
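As an example of the kind of scripted automation this role describes, here is a small sketch that computes the 5xx error rate from web-server access logs in the common/combined log format. It is a simplified parser for illustration only, not production-grade log analysis:

```python
def error_rate(lines):
    """Fraction of requests with a 5xx status in common/combined
    log format, where the status code is the first token after the
    quoted request string."""
    total = errors = 0
    for line in lines:
        parts = line.split('" ')
        if len(parts) < 2:
            continue  # malformed line, skip
        status = parts[1].split()[0]
        if status.isdigit():
            total += 1
            if status.startswith("5"):
                errors += 1
    return errors / total if total else 0.0
```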

Posted 1 day ago

Apply

6.0 years

1 - 1 Lacs

Delhi

Remote

Position: Senior Developer Analyst
Budget: 1.5
Experience: 6+ years
Salary range: 1,00,000-1,20,000
Remote

About the Role:
We are seeking a highly skilled Senior Developer Analyst with strong experience in payment processing systems to join our growing technology team. This role involves both technical development and analytical responsibilities, focusing on designing, implementing, and maintaining secure, high-performance payment solutions. You will collaborate with product managers, architects, QA, and other developers to deliver mission-critical features for payment gateways, merchant services, and transaction processing engines.

Key Responsibilities:
Analyze business requirements related to payment processing and translate them into technical designs and development plans.
Design and develop scalable, secure, and high-performance payment modules and integrations.
Integrate with third-party payment processors (e.g., Stripe, PayPal, Adyen, Worldpay, Razorpay).
Ensure compliance with PCI DSS and other regulatory requirements in all payment-related workflows.
Monitor, debug, and improve existing payment flows and handle exception/error scenarios effectively.
Collaborate with cross-functional teams including Product, QA, DevOps, and Support.
Write clean, maintainable, and testable code with appropriate documentation.
Conduct code reviews and mentor junior team members.
Analyze payment transaction data to identify trends, issues, and opportunities for improvement.
Support and optimize recurring billing, refunds, chargebacks, and fraud detection processes.

Required Qualifications:
Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
5+ years of experience in software development, with at least 3 years in payment processing systems.
Proficiency in programming languages such as Java, Python, Node.js, or C#.
Strong understanding of REST APIs, webhooks, and message queues.
Experience with relational databases (e.g., PostgreSQL, MySQL) and/or NoSQL databases.
Familiarity with PCI DSS compliance, tokenization, and data encryption best practices.
Deep understanding of the payment lifecycle, transaction statuses, settlement, and reconciliation.
Experience integrating with payment gateways, acquirers, or processors.
Strong debugging and problem-solving skills.
Excellent communication and analytical thinking.

Preferred Qualifications:
Experience with fraud prevention tools (e.g., Riskified, Sift).
Knowledge of digital wallets, UPI, BNPL, and international payment protocols.
Familiarity with microservices architecture and cloud environments (e.g., AWS, Azure, GCP).
Exposure to DevOps tools (Docker, Kubernetes, CI/CD pipelines).

Job Types: Full-time, Permanent
Pay: ₹100,000.00 - ₹120,000.00 per year
Work Location: In person
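Webhook integrations like those this listing describes are typically authenticated with an HMAC signature over the raw request body. A minimal sketch assuming an HMAC-SHA256 scheme (providers such as Stripe use variations of this; header names, timestamp handling, and signature formats differ per processor):

```python
import hashlib
import hmac

def verify_webhook(payload: bytes, signature_hex: str, secret: bytes) -> bool:
    """Verify a payment-provider webhook using an HMAC-SHA256 signature,
    comparing in constant time to avoid timing attacks."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```

The key points carry over to any real provider: sign the raw bytes (not a re-serialized JSON object) and always use a constant-time comparison.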

Posted 1 day ago

Apply

0 years

6 - 12 Lacs

Delhi

Remote

Position: SRE Developer
Experience: 10+ yrs
Location: Remote
Budget: 1.20 LPM
Salary range: 1,00,000 INR
Duration: 6 months (C2C)

JD:
Technical Skills:
Programming: Proficiency in languages like Python, Bash, or Java is essential.
Operating Systems: Deep understanding of Linux/Windows operating systems and networking concepts.
Cloud Technologies: Experience with AWS and Azure, including services, architecture, and best practices.
Containerization and Orchestration: Hands-on experience with Docker, Kubernetes, and related tools.
Infrastructure as Code (IaC): Familiarity with tools like Terraform, CloudFormation, or Azure CLI.
Monitoring and Observability: Experience with tools like Splunk, New Relic, or Azure Monitor.
CI/CD: Experience with continuous integration and continuous delivery pipelines, GitHub, and GitHub Actions.
Knowledge of supporting Azure ML, Databricks, and other related SaaS tools.

Preferred Qualifications:
Experience with specific cloud platforms (AWS, Azure).
Certifications related to cloud engineering or DevOps.
Experience with microservices architecture, including supporting AI/ML solutions.
Experience with large-scale system management and configuration.

Job Type: Full-time
Pay: ₹50,000.00 - ₹100,000.00 per month
Work Location: In person
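SRE work like the above often revolves around SLOs and error budgets. A small illustrative calculation for a request-based SLO; the formula is the standard error-budget definition (allowed failures = (1 - SLO) × total requests), and the function name is ours:

```python
def error_budget_remaining(slo: float, total_requests: int, failed: int) -> float:
    """Fraction of the error budget left for a request-based SLO.
    With a 99.9% SLO the budget is 0.1% of requests; the result goes
    negative once the budget is exhausted, signalling a freeze on
    risky releases under a typical error-budget policy."""
    budget = (1.0 - slo) * total_requests
    if budget == 0:
        return 0.0
    return (budget - failed) / budget
```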

Posted 1 day ago

Apply

0 years

8 - 10 Lacs

Delhi Cantonment

Remote

ABOUT TIDE
At Tide, we are building a business management platform designed to save small businesses time and money. We provide our members with business accounts and related banking services, but also a comprehensive set of connected administrative solutions from invoicing to accounting. Launched in 2017, Tide is now used by over 1 million small businesses across the world and is available to UK, Indian and German SMEs. Headquartered in central London, with offices in Sofia, Hyderabad, Delhi, Berlin and Belgrade, Tide employs over 2,000 people. Tide is rapidly growing, expanding into new products and markets and always looking for passionate and driven people. Join us in our mission to empower small businesses and help them save time and money.

ABOUT THE TEAM
Our 40+ engineering teams are working on designing, creating and running the rich product catalogue across our business and enablement areas (e.g. Payments Services, Admin Services, Ongoing Monitoring, etc.). We have a long roadmap ahead of us and always have interesting problems to tackle. We trust and empower our engineers to make real technical decisions that affect multiple teams and shape the future of Tide's Global One Platform. We work in small autonomous teams, grouped under common domains owning the full lifecycle of products and microservices in Tide's service catalogue. Our engineers self-organize, gather together to discuss technical challenges, and set their own guidelines in the different Communities of Practice regardless of where they currently stand in our Growth Framework.

ABOUT THE ROLE
As a Full Stack Engineer at Tide, you will be a key contributor to our engineering teams, working on designing, creating, and running the rich product catalogue across our business and enablement areas. You will have the opportunity to make a real difference by taking ownership of engineering practices and contributing to our event-driven microservice architecture, which currently consists of over 200 services owned by more than 40 teams. You will:
Define, maintain, and scale the services your team owns (you design it, you build it, you run it, you scale it globally).
Work on both new and existing products, tackling interesting and complex problems.
Collaborate closely with Product Owners to translate user needs, business opportunities, and regulatory requirements into well-engineered solutions.
Expose and consume RESTful APIs with a focus on good API design.
Learn and share knowledge with fellow engineers, as we believe in experimentation and collaborative learning for career growth.
Have the opportunity to join our Backend and Web Communities of Practice, where your input on improving processes and maintaining high quality will be valued.

WHAT ARE WE LOOKING FOR
Sound knowledge of a backend framework such as Spring/Spring Boot, with experience writing microservices that expose and consume RESTful APIs. While Java experience is not mandatory, a willingness to learn is essential, as most of our services are written in Java.
Experience engineering scalable and reliable solutions in a cloud-native environment, with a strong understanding of CI/CD fundamentals and practical Agile methodologies.
Some experience in web development, with a proven track record of building server-side applications and detailed knowledge of the relevant programming languages for your stack.
Strong knowledge of semantic HTML, CSS3, and JavaScript (ES6).
Solid experience with Angular 2+, RxJS, and NgRx.
A passion for building great products in small, autonomous, agile teams.
Experience building sleek, high-performance user interfaces and complex web applications that have been successfully shipped to customers.
A mindset of delivering secure, well-tested, and well-documented software that integrates with various third-party providers.
Solid experience using testing tools such as Jest, Cypress, or similar.
A passion for automated tests and experience writing testable code.

OUR TECH STACK
Java 17, Spring Boot and jOOQ to build the RESTful APIs of our microservices
Event-driven architecture with messages over SNS+SQS and Kafka to make them reliable
Primary datastores are MySQL and PostgreSQL via RDS or Aurora (we are heavy AWS users)
Angular 15+ (including NgRx and Angular Material)
Nrwl Nx to manage them as a monorepo
Storybook as live component documentation
Node.js, NestJS and PostgreSQL to power the BFF middleware
Contentful to provide dynamic content to the apps
Docker, Terraform, EKS/Kubernetes used by the Cloud team to run the platform
DataDog, ElasticSearch/Fluentd/Kibana, Semgrep, LaunchDarkly, and Segment to help us safely track, monitor and deploy
GitHub with GitHub Actions for Sonarcloud, Snyk and solid JUnit/Pact testing to power the CI/CD pipelines

WHAT YOU WILL GET IN RETURN
Make work, work for you! We are embracing new ways of working and support flexible working arrangements. With our Working Out of Office (WOO) policy our colleagues can work remotely from home or anywhere in their home country. Additionally, you can work from a different country for up to 90 days a year. Plus, you'll get:
Competitive salary
Self & Family Health Insurance
Term & Life Insurance
OPD Benefits
Mental wellbeing through Plumm
Learning & Development Budget
WFH Setup allowance
25 Annual leaves
Family & Friendly Leaves

TIDEAN WAYS OF WORKING
At Tide, we're Member First and Data Driven, but above all, we're One Team. Our Working Out of Office (WOO) policy allows you to work from anywhere in the world for up to 90 days a year.
We are remote first, but when you do want to meet new people, collaborate with your team or simply hang out with your colleagues, our offices are always available and equipped to the highest standard. We offer flexible working hours and trust our employees to do their work well, at times that suit them and their team.

TIDE IS A PLACE FOR EVERYONE
At Tide, we believe that we can only succeed if we let our differences enrich our culture. Our Tideans come from a variety of backgrounds and experience levels. We consider everyone irrespective of their ethnicity, religion, sexual orientation, gender identity, family or parental status, national origin, veteran, neurodiversity status or disability status. We believe it's what makes us awesome at solving problems! We celebrate diversity in our workforce as a cornerstone of our success; our commitment to a broad spectrum of ideas and backgrounds is what enables us to build products that resonate with our members' diverse needs and lives. We are One Team and foster a transparent and inclusive environment, where everyone's voice is heard.

Tide Website: https://www.tide.co/en-in/
Tide LinkedIn: https://www.linkedin.com/company/tide-banking/mycompany/

Your personal data will be processed by Tide for recruitment purposes and in accordance with Tide's Recruitment Privacy Notice.
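The event-driven architecture in Tide's stack (messages over SNS+SQS and Kafka) delivers messages at least once, so consumers must be idempotent. A toy sketch of that pattern; an in-memory seen-set stands in for what a real service would persist in a database:

```python
class IdempotentConsumer:
    """Sketch of an at-least-once message consumer: the broker may
    redeliver a message, so each one is processed exactly once by
    tracking message IDs that have already been handled."""

    def __init__(self, handler):
        self.handler = handler
        self.seen = set()  # a real service persists this (e.g. a DB table)

    def consume(self, message_id: str, body: dict) -> bool:
        if message_id in self.seen:
            return False  # duplicate delivery, skip the side effects
        self.handler(body)
        self.seen.add(message_id)
        return True
```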

Posted 1 day ago

Apply

0 years

0 Lacs

India

Remote

Company: Mindware Infotech (A Startup Venture from Mindware)
Location: Dwarka, New Delhi
Job Type: Full-time, On-site
Working Hours: 10:00 AM - 7:00 PM (Monday to Saturday)
Work Model: On-site only (No Work from Home)

About Us
Mindware Infotech, a dynamic startup venture backed by Mindware, a trusted name in barcode, RFID, and web solutions for over two decades, is building innovative software and cloud solutions. We are seeking passionate DevOps developers, including freshers, with expertise in cloud hosting to join our talented, diverse team in Dwarka, New Delhi. Our mission is to deliver cutting-edge point-of-sale, job portal, dating app, warehouse management, and RFID/IoT solutions. If you are driven by a passion for cloud infrastructure and thrive in a collaborative, fast-paced environment, we invite you to contribute to our vision.

Job Summary
We are looking for a motivated DevOps developer with a strong interest in cloud hosting on DigitalOcean and AWS to design, implement, and manage robust cloud infrastructure. This role is critical to ensuring the scalability, security, and performance of our applications. The ideal candidate, including freshers, will have a solid understanding of DevOps practices, cloud architecture, automation, and debugging, with an eagerness to learn and contribute to managing hosting control panels and executing cloud deployments.

Key Responsibilities
Assist in designing, deploying, and managing scalable, secure, and highly available cloud infrastructure on DigitalOcean and AWS.
Support automation of infrastructure provisioning, configuration, and deployment processes using tools like Terraform, Ansible, or similar.
Contribute to building and maintaining CI/CD pipelines to streamline application deployment and updates.
Participate in optimizing cloud environments for performance, cost-efficiency, and reliability, including load balancing, auto-scaling, and monitoring.
Assist in migrations of applications and websites from legacy systems to modern cloud platforms (DigitalOcean/AWS).
Monitor and maintain cloud infrastructure, ensuring uptime, security, and compliance with best practices.
Debug and resolve infrastructure, deployment, and application issues with guidance from senior team members.
Collaborate with development teams to integrate DevOps practices into the software development lifecycle.
Manage hosting control panels and server configurations to support web applications and databases.
Stay updated on emerging cloud technologies and contribute ideas for improving existing infrastructure.

Qualifications
Mandatory prerequisite: proven knowledge and hands-on experience with DigitalOcean or Amazon Web Services (AWS) for hosting, managing, and debugging applications and websites on the cloud. Resumes without this expertise will not be considered.
Open to freshers with a Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent coursework/projects).
Strong knowledge of cloud platforms (AWS, DigitalOcean), including services like EC2, S3, RDS, Lambda, Droplets, or Spaces.
Familiarity with infrastructure-as-code tools (e.g., Terraform, CloudFormation) or configuration management tools (e.g., Ansible, Chef, Puppet) is a plus.
Knowledge of scripting languages such as Python, Bash, or PowerShell for automation is desirable.
Familiarity with containerization and orchestration tools like Docker or Kubernetes is an advantage.
Familiarity with CI/CD tools such as Jenkins, GitLab CI, or GitHub Actions is a plus.
Knowledge of database management (PostgreSQL, MySQL) and networking concepts (VPC, DNS, load balancing).
Strong problem-solving skills, with the ability to debug cloud infrastructure and application issues under guidance.
Good communication skills and a collaborative mindset for working in a team environment.

Why Join Us?
Innovative environment: work on groundbreaking projects in a startup backed by the stability of Mindware.
Professional growth: access continuous learning opportunities and mentorship to kickstart or advance your career in cloud and DevOps.
Collaborative culture: join a diverse team of professionals from across India, united by a shared passion for innovation and excellence.

Walk-in Interview Details
We are conducting walk-in interviews for the DevOps Developer position. Bring your resume and a passion for cloud innovation.
Location: Mindware Infotech, Dwarka, New Delhi
Interview Dates: Monday, August 11, 2025, to Wednesday, August 13, 2025
Interview Time: 10:00 AM - 1:00 PM

How to Apply
Attend our walk-in interviews with your updated resume highlighting your DigitalOcean or AWS expertise in hosting, managing, and debugging. For inquiries, contact:
Email: gulshanmarwah@indianbarcode.com
WhatsApp: Varsha at 8527522688
Join Mindware Infotech and help shape the future of cloud-hosted solutions!

Job Types: Full-time, Permanent, Fresher
Schedule: Day shift
Work Location: In person

Posted 1 day ago

Apply

10.0 years

3 - 3 Lacs

Noida

On-site

We're Hiring: Project Manager
Noida
10–15 years
Up to ₹35 LPA

Are you a tech visionary with strong leadership skills and deep full-stack expertise? Join our growing team that's transforming industries with QR code and data analytics solutions focused on IoT, Blockchain, and Supply Chain Intelligence.

What You'll Do:
Lead and mentor a high-performing dev team (10–15 developers)
Oversee multiple full-stack projects with cutting-edge tech
Manage project delivery, client interactions, and deployment cycles
Ensure code quality, scalability, security, and performance
Bridge the gap between business needs and tech solutions

Your Tech Toolkit:
Frontend: React.js / Angular / Vue.js, HTML/CSS, Tailwind, Redux
Backend: Node.js / Python / Java / .NET / PHP, REST & GraphQL APIs
Databases: MySQL, PostgreSQL, MongoDB, Redis
DevOps: Docker, Kubernetes, Jenkins, GitHub Actions, AWS/Azure
Specialized: IoT & Blockchain API integration, regulatory compliance (DSCSA/GDPR)

What We're Looking For:
10+ years of full-stack development experience
6+ years in technical team leadership
Expertise in delivering scalable, secure enterprise applications
Strong problem-solving and communication skills

Perks:
Work on transformative supply chain projects
Growth & upskilling opportunities
Collaborative and tech-first culture

If you're ready to lead the future of tech-driven supply chains, apply now.
SR HR UMA: +91 9920571936

Job Types: Full-time, Permanent
Pay: ₹340,000.00 - ₹350,000.00 per year
Benefits:
Paid sick time
Paid time off
Provident Fund
Work Location: In person
Speak with the employer: +91 9920571936

Posted 1 day ago

Apply

5.0 years

19 - 39 Lacs

Noida

Remote

Sr Site Reliability Engineer (Location: Bengaluru, India)

RACE Consulting is hiring on behalf of one of our esteemed clients! We're looking for a highly skilled SRE professional with deep expertise in modern DevOps tools like Terraform, GitLab, Grafana, and Helm. If you're a self-starter with a strong background in cloud infrastructure, monitoring (Dynatrace), CI/CD, and Python, this could be your next big move.

Experience: 5+ years
Location: Bengaluru / Remote (India)
Key Skills: Terraform, GitLab, Dynatrace, Kubernetes, Python, Helm, Docker

Job Type: Full-time
Pay: ₹1,950,000.00 - ₹3,900,000.00 per year
Benefits:
Flexible schedule
Health insurance
Leave encashment
Provident Fund
Work Location: In person

Posted 1 day ago

Apply

0 years

4 - 7 Lacs

Noida

On-site

Ready to build the future with AI? At Genpact, we don’t just keep up with technology—we set the pace. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what’s possible, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions – we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Inviting applications for the role of Lead Consultant – Python Developer with GCP and Kubernetes The ideal candidate will have experience working with GCP and proficiency in Python development. You will play a key role in design and development, ensuring quality and integrity throughout. Responsibilities Demonstrate proficiency as a Python framework developer. Design and develop robust Python frameworks, leveraging experience with Google Cloud Platform (GCP) for cloud-based solutions Apply expertise in Kubernetes for container orchestration and management Exhibit excellent communication skills for effective collaboration Qualifications we seek in you! 
Minimum Qualifications Experience with Python development, including designing frameworks and using Pandas Proficiency with GCP services and tools for cloud-based applications Expertise in Kubernetes for efficient container management Hands-on experience in an agile development environment Strong problem-solving skills and ability to troubleshoot complex issues Excellent communication skills and ability to work effectively in a fast-paced, team-oriented environment Preferred Qualifications/ Skills Possess excellent analytical and problem-solving skills, with keen attention to detail Demonstrate effective communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams and stakeholders Prior experience in a consulting role or client-facing environment is highly desirable Advanced knowledge of Python, GCP, and Kubernetes to drive innovative solutions Why join Genpact? Lead AI-first transformation – Build and scale AI solutions that redefine industries Make an impact – Drive change for global enterprises and solve business challenges that matter Accelerate your career—Gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills Grow with the best – Learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace Committed to ethical AI – Work in an environment where governance, transparency, and security are at the core of everything we build Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together. 
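The framework-design responsibility described above can be illustrated with a minimal sketch in plain stdlib Python: a registry that maps step names to callables so processing pipelines can be assembled declaratively. All names and steps here are illustrative examples, not taken from the posting.

```python
# Minimal sketch of an extensible Python "framework" pattern: a registry
# mapping step names to callables, so pipelines are assembled by name.
from typing import Callable, Dict, List


class Pipeline:
    """Registers named processing steps and runs them in order."""

    def __init__(self) -> None:
        self._steps: Dict[str, Callable] = {}

    def step(self, name: str) -> Callable:
        """Decorator that registers a function under a step name."""
        def register(fn: Callable) -> Callable:
            self._steps[name] = fn
            return fn
        return register

    def run(self, order: List[str], data):
        """Apply the registered steps to `data` in the given order."""
        for name in order:
            data = self._steps[name](data)
        return data


pipeline = Pipeline()

@pipeline.step("clean")
def drop_negatives(values):
    # Keep only non-negative values.
    return [v for v in values if v >= 0]

@pipeline.step("scale")
def double(values):
    # Multiply every value by two.
    return [v * 2 for v in values]

print(pipeline.run(["clean", "scale"], [3, -1, 5]))  # [6, 10]
```

The same registry idea underlies many real Python frameworks, where plugins or handlers are attached by decorator rather than hard-coded.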
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job Lead Consultant Primary Location India-Noida Schedule Full-time Education Level Bachelor's / Graduation / Equivalent Job Posting Aug 6, 2025, 6:31:43 PM Unposting Date Ongoing Master Skills List Consulting Job Category Full Time

Posted 1 day ago

Apply

5.0 years

2 - 8 Lacs

Noida

On-site

Posted On: 6 Aug 2025 Location: Noida, UP, India Company: Iris Software Why Join Us? Are you inspired to grow your career at one of India’s Top 25 Best Workplaces in the IT industry? Do you want to do the best work of your life at one of the fastest growing IT services companies? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations? It’s happening right here at Iris Software. About Iris Software At Iris Software, our vision is to be our client’s most trusted technology partner, and the first choice for the industry’s top professionals to realize their full potential. With over 4,300 associates across India, U.S.A, and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation. Working at Iris Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about “Being Your Best” – as a professional and person. It is about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We’re a place where everyone can discover and be their best version. Job Description 5+ years of enterprise test engineering (both manual and automation) Proficiency in Java, Kotlin, or a similar programming language; a basic understanding of Python is preferable. 
Good knowledge of DevOps tooling: Jenkins, Kubernetes, and Docker, along with cloud platforms (AWS, Azure) Familiarity with API testing tools, e.g. Postman, REST Assured, HttpClient Familiarity with UI testing tools, e.g. Selenium, Serenity, Cypress, and similar Familiarity with project tracking software such as JIRA and test management platforms such as Xray Broad understanding of computer science and quality engineering principles Good understanding of Linux/Unix-based systems and shell scripting Proven track record of delivering test automation for highly complex software systems Experience planning for and executing end-to-end functional and non-functional tests Good communication skills Strong problem-solving and analytical skills Comfortable and able to work under pressure Mandatory Competencies QA/QE - QA Automation - Core Java QA/QE - QA Automation - Framework creation for testing QA/QE - QA Automation - Python Beh - Communication QA/QE - QA Manual - API Testing Development Tools and Management - Development Tools and Management - Postman QA/QE - QA Automation - Rest Assured Perks and Benefits for Irisians At Iris Software, we offer world-class benefits designed to support the financial, health and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.

Posted 1 day ago

Apply

3.0 years

3 - 4 Lacs

Lucknow

On-site

Backend Developer – Location: Lucknow Salary: ₹30,000 – ₹40,000 per month Experience: 3–5 years Job Type: Full-Time Qualifications: - B.Tech / BCA / MCA in Computer Science, IT, or a relevant field - Proven experience in backend development with Python and/or Node.js Required Skills: - Strong proficiency in Python (Django, Flask, FastAPI) or Node.js - Database expertise: PostgreSQL, MongoDB, Redis - RESTful API design and integration - Version control systems (Git) - Experience with containerized environments (Docker, Kubernetes is a plus) - Knowledge of cloud platforms (AWS/GCP/Azure is an advantage) - Understanding of security and data protection - Writing scalable, reusable, testable, and efficient code Responsibilities: - Design, implement, and maintain server-side logic - Develop and maintain APIs and microservices - Collaborate with frontend developers and DevOps teams to integrate systems - Optimize applications for speed and scalability - Troubleshoot and debug applications - Implement data storage solutions and manage database performance Job Type: Full-time Pay: ₹30,000.00 - ₹40,000.00 per month Work Location: In person Speak with the employer +91 8143775047
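The RESTful API design skill this posting asks for can be sketched with a stdlib-only toy: a table mapping (method, path) pairs to handler functions that return JSON, which is the routing idea behind frameworks like Flask, FastAPI, or Express. Endpoints and data below are hypothetical examples.

```python
# Stdlib-only sketch of RESTful routing: a dispatch table from
# (HTTP method, path) to handler functions returning JSON bodies.
import json

ROUTES = {}

def route(method, path):
    """Register a handler for an HTTP method and path."""
    def register(fn):
        ROUTES[(method, path)] = fn
        return fn
    return register

@route("GET", "/health")
def health():
    return {"status": "ok"}

@route("GET", "/users")
def list_users():
    # Hypothetical data; a real service would query a database.
    return {"users": ["asha", "ravi"]}

def dispatch(method, path):
    """Look up the handler and return an (HTTP status, JSON body) pair."""
    handler = ROUTES.get((method, path))
    if handler is None:
        return 404, json.dumps({"error": "not found"})
    return 200, json.dumps(handler())

status, body = dispatch("GET", "/health")
print(status, body)  # 200 {"status": "ok"}
```

Production frameworks add path parameters, middleware, and serialization on top of this same lookup-and-dispatch core.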

Posted 1 day ago

Apply

2.0 - 3.0 years

4 - 6 Lacs

Noida

On-site

Join our Team About this opportunity: Join Ericsson as an Oracle Database Administrator and play a key role in managing and optimizing our critical database infrastructure. As an Oracle DBA, you will be responsible for installing, configuring, upgrading, and maintaining Oracle databases, ensuring high availability, performance, and security. You’ll work closely with cross-functional teams to support business-critical applications, troubleshoot issues, and implement database upgrades and patches. This role offers a dynamic and collaborative environment where you can leverage your expertise to drive automation, improve efficiency, and contribute to innovative database solutions. What you will do: Oracle, PostgreSQL, MySQL, and/or MariaDB database administration in production environments. Experience with Container Databases (CDBs) and Pluggable Databases (PDBs) for better resource utilization and simplified management. High availability configuration using Oracle Data Guard, PostgreSQL/MySQL replication, and/or MariaDB Galera clusters. Oracle Enterprise Manager administration, including alarm integration. Familiarity with Linux tooling such as iotop, vmstat, nmap, OpenSSL, grep, ping, find, df, ssh, and dnf. Familiarity with Oracle SQL Developer, Oracle Data Modeler, pgAdmin, Toad, phpMyAdmin, and MySQL Workbench is a plus. Familiarity with NoSQL, such as MongoDB, is a plus. Knowledge of middleware like GoldenGate, both Oracle-to-Oracle and Oracle-to-BigData. Conduct detailed performance analysis and fine-tuning of SQL queries and stored procedures. Analyze AWR and ADDM reports to identify and resolve performance bottlenecks. Implement and manage backup strategies using RMAN and other industry-standard tools. Perform pre-patch validation using opatch and datapatch. Test patches in a non-production environment to identify potential issues before applying them to production. 
Apply Oracle quarterly patches and security updates. The skills you bring: Bachelor of Engineering or equivalent experience with at least 2 to 3 years in the field of IT. Must have experience in handling operations in any customer service delivery organization. Thorough understanding of the basic framework of Telecom/IT processes. Willingness to work in a 24x7 operational environment with rotating shifts, including weekends and holidays, to support critical infrastructure and ensure minimal downtime. Strong understanding of Linux systems and networking fundamentals. Knowledge of cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes) is a plus. Oracle Certified Professional (OCP) is preferred. Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before for some of the world’s toughest problems. You'll be challenged, but you won’t be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like. Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Learn more. Primary country and city: India (IN) || Noida Req ID: 770689

Posted 1 day ago

Apply

4.0 years

19 - 39 Lacs

Noida

On-site

Sr Software Engineer (3 Openings) RACE Consulting is hiring for one of our top clients in the cybersecurity and AI space. If you're passionate about cutting-edge technology and ready to work on next-gen AI-powered log management and security automation, we want to hear from you! Role Highlights: Work on advanced agentic workflows, threat detection, and behavioral analysis. Collaborate with a world-class team of security researchers and data scientists. Tech stack: Scala, Python, Java, Go, Docker, Kubernetes, IaC. Who We're Looking For: 4+ years of experience in backend development. Strong knowledge of microservices, containerization, and cloud-native architecture. Bonus if you’ve worked in cybersecurity or AI-driven analytics. Job Type: Full-time Pay: ₹1,950,000.00 - ₹3,900,000.00 per year Benefits: Flexible schedule Health insurance Leave encashment Provident Fund Work Location: In person

Posted 1 day ago

Apply

25.0 years

55 Lacs

Ahmedabad

On-site

Join Our Team at Litera: Where Legal Technology Meets Excellence Litera has been at the forefront of legal technology innovation for over 25 years, crafting legal software to amplify impact and maximize efficiency. Developed by the best legal minds in the industry, our comprehensive suite of integrated legal tools is both powerful and user-friendly and simplifies the way modern firms manage core legal workflows, secure collaboration, and organize firm knowledge and experience. Every day, we help more than 2.3 million legal professionals focus on their craft. Litera: Less busy work, more of your life’s work. The Opportunity: We seek a highly skilled and experienced Sr. Software Architect who will demonstrate technical leadership and strong communication and presentation skills to join our dynamic and innovative technology team. As a member of the team, you will be responsible for contributing to architectural designs and implementing technology solutions at Litera. This is a unique opportunity to contribute to technical strategy and architectural decisions within a growing and ever-changing ecosystem. The role works closely with both the Engineering and Product teams and spans both products and the underlying technology platform. Our customers rely on us to deliver innovative, strategic and forward-looking solutions. This is your chance to be a key member of a global software company. Responsibilities Develop innovative architectural solutions in a wide variety of problem sets and domains. Create and share architectural designs, best practices and technology roadmaps with cross-functional teams. Collaborate with senior architects to implement business and product requirements into technical solutions and software architectures. Focus on non-functional requirements, including deployment needs, scalability, performance and reliability when developing architectures and best practices. 
Develop "proof of concept" solutions to help demonstrate and communicate technical designs and desired architectures. Develop architecture and guidelines to move on-prem systems to SaaS-based multi-tenant solutions on Azure Cloud. Support adoption of new technology within assigned teams and projects by delivering Proof of Concepts with supporting design diagrams, technical documentation, business impact analysis and ROI. Optimize cloud-based systems for high availability, fault tolerance, and disaster recovery. Mentor junior developers and contribute to team knowledge sharing to support common platform architecture goals. Qualifications 8-10 years of software development experience with excellent .NET/C#, React and Typescript coding skills. 5-7 years of experience working as a Software Architect. 3-5 years of experience with Microsoft Azure and/or AWS cloud-native architectures. Outstanding communication and teamwork skills, with a proven ability to collaborate with, and influence others. BS degree in Computer Science, Computer Information Systems, or Engineering (or related experience). Solid understanding of distributed systems architecture and microservices. Experience with building highly scalable and resilient cloud services. Experience developing reference architectures and proof of concept prototypes. Experience implementing modern security solutions including token-based authentication, OAuth 2.0 workflows, SAML authentication and authorization techniques. Preferable experience: Azure OpenAI/GPT/LLMs, Azure Kubernetes Service, Azure Service Bus, Azure Storage, Azure SQL, Azure CosmosDB, Lucene/Elasticsearch, Azure DevOps, CI/CD. Preferable certifications: Microsoft or other industry certifications in architecture a plus. Clearance of standard background check prior to employment with candidate consent. Career Progression Timeline Within 1 month, you will: Learn the functional areas of the products and intended uses. 
Establish relationships with key members of the leadership and architecture teams. Participate and contribute to assigned work activities and meetings (planning, daily standups, etc.). Acclimate to the environment and begin to gain insights into the technology and innovation opportunities that exist in the company. Contribute thoughts, ideas, relevant expertise in at least one key strategic initiative. Goal: expand to multiple initiatives over time. Review and learn key architecture team artifacts and ways of working – Product Inventory, Reference Architecture. Within 3 months, you will: Contribute reference architecture and proof of concept improvements regularly. Develop subject matter expertise in more than one product area. Contribute to translating product vision into technical requirements and designs. Support architectural planning initiatives. Active technical contribution – sharing ideas and architectural insights within your project teams. Within 6 months, you will: Collaborate with other development team members to build robust architectures and product integrations. Identify and propose opportunities for product and technology improvement. Be a key technical resource for initiatives you are involved in. Contribute technical expertise and support decision-making. Why Join Litera? 
The company culture: We emphasize helping each other grow, doing the right thing always, and being part of a journey to amplify impact, creating an exciting and fulfilling work environment Commitment to Employees: Our people commitment is based on what employees love most about being part of the team, focusing on tools that matter to the difference-makers in the legal world and amplifying their impact Global, Dynamic, and Diverse Team: Ours is a global company with ambitious goals and unlimited opportunities, offering a dynamic and diverse work environment where employees can grow, listen, empathize, and problem-solve together Comprehensive Benefits Package: Experience peace of mind with our health insurance, retirement savings plans, generous paid time off, and a supportive work-life balance. We invest in your well-being and future, ensuring a rewarding career journey. Career Growth and Development: We provide career paths and opportunities for professional development, allowing employees to progress through various technical and leadership roles Job Type: Full-time Pay: From ₹5,500,000.00 per year Benefits: Health insurance Paid sick time Paid time off Provident Fund Expected Start Date: 30/08/2025

Posted 1 day ago

Apply

5.0 years

4 - 5 Lacs

Ahmedabad

On-site

Position - 01 Job Location - Ahmedabad Qualification - Any Graduate Years of Exp - 5+ years About us Bytes Technolab is a full-range web application development company, established in 2011, with an international presence in the USA, Australia, and India. Bytes has exhibited excellent craftsmanship in innovative web development, eCommerce solutions, and mobile application development services ever since its inception. Roles & responsibilities Design, implement, and maintain scalable and reliable cloud infrastructure solutions using GCP and AWS services. Deploy, configure, and manage Kubernetes clusters on GCP and AWS, ensuring seamless integration with RabbitMQ. Collaborate with software development teams to optimize application deployment, monitoring, and performance in a cloud environment. Implement and manage RabbitMQ messaging queues for efficient and reliable communication between services. Develop and maintain CI/CD pipelines for automated application deployment and release management, including integration with RabbitMQ. Implement infrastructure as code (IaC) using tools like Terraform or CloudFormation to ensure consistent and repeatable deployments. Monitor and troubleshoot system and application performance, including RabbitMQ queue monitoring and optimization. Conduct regular audits and ensure compliance with industry best practices and security standards, especially regarding messaging queue security. Collaborate with cross-functional teams to identify and resolve infrastructure and deployment-related issues, with a focus on messaging queues. Stay up-to-date with the latest trends and technologies in the DevOps, cloud computing, and messaging queue domains, and evaluate their applicability to the organization. Skills required Bachelor's degree in computer science, engineering, or a related field (or equivalent work experience). Proven experience as a DevOps Engineer or similar role, with a focus on Kubernetes, GCP, AWS, and RabbitMQ. 
Strong knowledge of Kubernetes, including cluster management, deployment, and troubleshooting. Hands-on experience with GCP services, such as Compute Engine, Kubernetes Engine, Cloud Storage, and Cloud Networking. Familiarity with AWS services, including EC2, ECS/EKS, S3, RDS, and CloudFormation. Proficiency in scripting languages such as Python, Bash, or PowerShell for automation and infrastructure management. Experience with configuration management tools like Ansible, Chef, or Puppet. Solid understanding of CI/CD principles and experience with CI/CD tools like Jenkins, GitLab CI/CD, or CircleCI. Knowledge of containerization technologies like Docker and container orchestration platforms like Kubernetes. Knowledge of Go (Golang) is a plus. Strong understanding of RabbitMQ, including setup, configuration, clustering, and message reliability. Strong problem-solving skills and the ability to troubleshoot complex issues in a distributed, cloud-based environment. Excellent communication and collaboration skills to work effectively in cross-functional teams.
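The Kubernetes and infrastructure-as-code skills listed above both rest on the same declarative idea: describe desired state as data and let tooling reconcile the cluster to it. The sketch below builds a minimal Kubernetes Deployment manifest (the real apps/v1 schema) in Python; the image and names are hypothetical examples.

```python
# Sketch of declarative configuration: build a minimal Kubernetes
# Deployment manifest (apps/v1) as plain data, then render it.
import json

def deployment_manifest(name, image, replicas=2, port=8080):
    """Return a minimal Kubernetes Deployment manifest as a dict."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # The selector must match the pod template labels, or the
            # API server rejects the Deployment.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "ports": [{"containerPort": port}],
                    }]
                },
            },
        },
    }

# Hypothetical worker service consuming from a message queue.
manifest = deployment_manifest("queue-worker", "example.com/worker:1.0", replicas=3)
print(json.dumps(manifest, indent=2))
```

Serialized to YAML, this is exactly what `kubectl apply -f` would consume; IaC tools like Terraform or Helm generate the same kind of structure from templates and variables.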

Posted 1 day ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies