8.0 years
0 Lacs
Karnataka, India
On-site
Description: To deliver and maintain IT applications and services in order to realize the Bank's strategy in the field of information technology. Engineers in this job category work in an agile way, in squads, to deliver short-cycle, full-fledged IT products. The DevOps Engineer is responsible for keeping the system working, acting as a front liner. The role is to define strategy and lead the implementation of DevSecOps pipelines for the bank's digital and non-digital journeys, orchestrate build and release pipelines, and ensure seamless application promotion for all the digital squads. The engineer provides the necessary support to the Agile coach, scrum master and development squads, and automates complete rollouts (non-production and production) for all applications, including APIs and database promotions. The role also ensures the container platform configuration, setup and availability in the Azure cloud environment.
• Uses his/her technical expertise and experience to contribute to all sprint events (planning, refinements, retrospectives, demos)
• Consults with the team about what is needed to fulfil the functional and non-functional requirements of the IT product to be built and released
• Defines and orchestrates the DevOps process for the IT product, enables automated unit tests in line with the customer's wishes and the IT area's internal ambitions, and reviews colleagues' IT products
• Defines, designs and enables automated builds and automated tests of IT products (functional, performance, resilience and security tests)
• Performs Life Cycle Management (including decommissioning) for IT products under management
• Defines and improves the Continuous Delivery process
• Sets up the IT environment, deploys the IT product on the IT infrastructure and implements the required changes
• Sets up monitoring of IT product usage by the customer
Operating Environment, Framework and Boundaries, Working Relationships
• Works within a multidisciplinary team or in an environment in which multidisciplinary teamwork is carried out.
• Is primarily responsible for the automated non-production and production rollouts (or technical configuration) of software applications.
• The range of tasks includes the following:
o The analysis and design of the DevOps solution for any application (or the technical configuration);
o Coding and reviewing the pipelines and/or package integration in programming languages, scripting languages and frameworks:
- Azure/AWS/Cloud Pak DevOps services
- Pipeline creation using templates, and enhancing the existing templates based on the needs
- Integration with various DevOps tools like SonarQube, Veracode, Twistlock, Ansible, Terraform, HashiCorp Vault
- Azure Test Plans setup and configuration with pipelines
- Cloud-based deployments for Spring Boot Java, React.js, Node.js and .NET Core using native K8s and AKS/EKS/OpenShift
- Experience in setting up Kubernetes clusters with ingress controllers (nginx and nginx+)
- Experience in Python and shell scripting
- Logging and monitoring using Splunk, EFK and ELK
- Middleware on-premises automated deployments for WebSphere, JBoss, BPM, IIS and IIB
- Expert in OS - RHEL, CentOS, Ubuntu
- Experience in Liquibase/Flyway for DB automation
o Basic application development knowledge for cloud-native and traditional apps
o API Gateway and API deployments; database systems, with knowledge of SQL and NoSQL stores (e.g. MySQL, Oracle, MongoDB, Couchbase, etc.)
o Continuous Delivery (Compile, Build, Package, Deploy);
o Test-Driven Development (TDD) and test automation (e.g. regression, functional and integration tests); debugging and profiling;
o Software configuration management and version control
o Work in an agile/scrum environment, meeting sprint commitments and contributing to the agile process
o Maintain traceability of testing activities
REQUIREMENTS:
• 8 to 10 years of overall experience and 6 to 7 years' experience as a DevOps Engineer in defining solutions and implementing them on common on-premises and cloud platforms, with scripting languages and frameworks expertise
• Expert in Azure DevOps services
• Hands-on in pipeline creation using templates and enhancing the existing templates based on the needs
• Able to perform integration by coding the templates with various DevOps tools like SonarQube, Veracode, Twistlock, Ansible, Terraform, HashiCorp Vault and UCD
• Implement Azure Test Plans setup and configuration with pipelines
• Automate cloud-based deployments for Spring Boot Java, React.js, Node.js and .NET Core using native K8s and AKS/EKS/OpenShift
• Experience in setting up Kubernetes clusters with ingress controllers (nginx and nginx+)
• Expert in Python/shell scripting
• Expert in DB automation tools (Flyway/Liquibase)
• Experience in Azure Files and sync solution implementation
• Experience in logging and monitoring using Splunk, EFK and ELK
• Experience in middleware on-premises automated deployments for WebSphere, JBoss, BPM, IIS and IIB
• Expert in OS - RHEL, CentOS, Ubuntu
• Knowledge of IBM Cloud Pak using Red Hat OpenShift
• Basic application development knowledge for cloud-native and traditional apps
• Experience in API Gateway and API deployments
• Nice to have: knowledge of immutable infrastructure, infrastructure automation and provisioning tools
• Strong understanding of Agile methodologies
• Strong communication skills with the ability to communicate complex technical concepts and align the organization on decisions
• Sound problem-solving skills with the ability to quickly process complex information and present it clearly and simply
• Utilizes team collaboration to create innovative solutions efficiently
• Passionate about technology and excited about the impact of emerging/disruptive technologies
• Believes in a culture of brutal transparency and trust
• Open to learning new ideas outside scope or knowledge
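This listing centers on Azure DevOps pipelines driven from templates plus Python/shell scripting. As a rough, hedged illustration of that combination, the sketch below queues an Azure DevOps pipeline run through the Pipelines REST API; the organization, project, pipeline id and branch are placeholders, and the api-version should be checked against current Azure DevOps documentation before use.

```python
# Hypothetical helper: queue an Azure DevOps pipeline run from a Python script.
# Organization, project, pipeline id and the exact api-version are placeholders;
# verify against the Azure DevOps Pipelines REST API reference for your tenant.
import os
import requests

ORG = "my-org"               # assumption: your Azure DevOps organization
PROJECT = "my-project"       # assumption: your project
PIPELINE_ID = 42             # assumption: numeric id of an existing YAML pipeline
PAT = os.environ["AZDO_PAT"] # personal access token supplied via environment

def queue_pipeline_run(branch: str = "refs/heads/main") -> dict:
    """Request a new run of the pipeline on the given branch and return the response JSON."""
    url = (
        f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/pipelines/"
        f"{PIPELINE_ID}/runs?api-version=7.1-preview.1"
    )
    payload = {"resources": {"repositories": {"self": {"refName": branch}}}}
    # Azure DevOps accepts basic auth with an empty username and the PAT as password.
    resp = requests.post(url, json=payload, auth=("", PAT), timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    run = queue_pipeline_run()
    print(f"Queued run {run.get('id')} with state {run.get('state')}")
```

A script like this is the kind of glue that lets database promotions or middleware rollouts be chained onto an existing template-based pipeline without manual clicks.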
Posted 5 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
P-928 At Databricks, we are passionate about enabling data teams to solve the world's toughest problems — from making the next mode of transportation a reality to accelerating the development of medical breakthroughs. We do this by building and running the world's best data and AI infrastructure platform so our customers can use deep data insights to improve their business. Founded by engineers — and customer obsessed — we leap at every opportunity to solve technical challenges, from designing next-gen UI/UX for interfacing with data to scaling our services and infrastructure across millions of virtual machines. And we're only getting started in Bengaluru, India!
As a Software Engineer at Databricks India, you can get to work across: Backend, DDS (Distributed Data Systems), Full Stack.
The Impact You'll Have
Our Backend teams span many domains across our essential service platforms. For instance, you might work on challenges such as:
Problems that span from product to infrastructure, including distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience.
Delivering reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.
Building reliable, scalable services (e.g. Scala, Kubernetes) and data pipelines (e.g. Apache Spark™, Databricks) to power the pricing infrastructure that serves millions of cluster-hours per day, and developing product features that empower customers to easily view and control platform usage.
Our DDS team spans across: Apache Spark™, Data Plane Storage, Delta Lake, Delta Pipelines, and Performance Engineering.
As a Full Stack software engineer, you will work closely with your team and product management to bring that delight through great user experience.
What We Look For
BS (or higher) in Computer Science, or a related field
3+ years of production-level experience in one of: Python, Java, Scala, C++, or a similar language
Experience developing large-scale distributed systems from scratch
Experience working on a SaaS platform or with Service-Oriented Architectures
About Databricks
Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.
Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.
Our Commitment to Diversity and Inclusion
At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.
Compliance If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
Posted 5 days ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Python Data Engineer
Location: Bangalore/Chennai (Hybrid)
JD:
· Minimum 4 years of Python development experience across a broad technology stack
· Experience with Python-based testing
· Proven experience in data processing (ETL/ELT, SQL, data validation and transformation)
· Proven experience in designing, building and resolving problems across the Apache Airflow data orchestration and movement platform (for backfilling the current 2 resignations; maybe this could be considered a good-to-have skillset)
· Must have excellent organisational skills and the ability to collaborate with people at different levels
· Broad knowledge of Financial Services products and processes
· Broad knowledge of cloud-native technologies (Kubernetes, Docker, REST APIs, S3, Spark)
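Since the role combines Python development with Apache Airflow orchestration, here is a minimal illustrative DAG: a toy extract-validate-load chain. The task logic, sample rows and schedule are assumptions for demonstration only, and the Airflow 2.4+ style `schedule` argument is assumed.

```python
# Minimal sketch of an Airflow DAG of the kind this listing describes: a small
# extract -> validate -> load chain. Data, rules and the target system are
# illustrative assumptions, not details from the posting.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # In a real pipeline this would pull rows from a source system (API, S3, DB).
    return [{"id": 1, "amount": 250.0}, {"id": 2, "amount": -10.0}]


def validate(ti, **context):
    rows = ti.xcom_pull(task_ids="extract")
    # Simple data-validation rule: reject negative amounts.
    return [r for r in rows if r["amount"] >= 0]


def load(ti, **context):
    rows = ti.xcom_pull(task_ids="validate")
    print(f"would load {len(rows)} validated rows into the warehouse")


with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # assumption: Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="validate", python_callable=validate)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```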
Posted 5 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
KEY RESPONSIBILITIES
• Design, build, and deploy end-to-end Generative AI applications using LLMs, vision-language models, and multimodal AI.
• Proficiency in Python with strong experience using GenAI/LLM frameworks such as LangChain, LlamaIndex, CrewAI, or HuggingFace Transformers.
• Develop applications using OpenAI, Azure OpenAI, or custom-deployed open-source models (e.g., LLaMA, DeepSeek, Mistral).
• Create and manage agent-based architectures for task orchestration using frameworks like AutoGen, CrewAI, or Semantic Kernel.
• Implement Retrieval-Augmented Generation (RAG) systems using FAISS, Weaviate, or Azure Cognitive Search.
• Work with document loaders, chunking strategies, vector embeddings, and prompt engineering techniques for optimal model performance.
• Build and deploy AI-powered apps using FastAPI (backend), React/Streamlit/Gradio (frontend), and PostgreSQL or vector DBs (Pinecone, Qdrant).
• Design and operationalize GenAI workflows using Docker, Kubernetes, and MLOps tools (e.g., MLflow, Azure ML).
• Integrate AI with cloud-native services (e.g., Azure Functions, EventHub, Cosmos DB, OneLake, Azure Fabric).
• Continuously improve model accuracy and relevance through fine-tuning, prompt tuning, and response evaluation techniques.
• Ensure alignment of GenAI solutions with product goals and compliance with security, governance, and data privacy standards.
• Evaluate trade-offs across different model providers (OpenAI, Azure, Anthropic, open-source) based on latency, cost, accuracy, and IP risk.
• Collaborate with cross-functional teams including Product, Data Engineering, and QA to ensure robust deployment of GenAI features.
• Implement logging, monitoring, and feedback loops to support performance tuning and hallucination mitigation.
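For context on the RAG responsibilities above, here is a minimal sketch assuming LangChain, FAISS and OpenAI (one of several stacks the listing names). Import paths vary across LangChain releases, the source file is hypothetical, and an OPENAI_API_KEY is assumed in the environment; treat this as an outline, not a definitive implementation.

```python
# Rough Retrieval-Augmented Generation (RAG) sketch: chunk a document, index the
# chunks in FAISS, retrieve the most relevant chunks for a question, and stuff
# them into the prompt. All file names and parameters are assumptions.
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# 1. Chunking: split source documents into overlapping chunks.
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_text(open("policy_handbook.txt").read())  # hypothetical file

# 2. Vector embeddings: index the chunks in a FAISS store.
vectorstore = FAISS.from_texts(chunks, OpenAIEmbeddings())
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

# 3. Retrieval + prompt engineering: ground the answer in the retrieved context.
def answer(question: str) -> str:
    context = "\n\n".join(d.page_content for d in retriever.invoke(question))
    prompt = (
        "Answer using only the context below. If the answer is not there, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return ChatOpenAI(model="gpt-4o-mini").invoke(prompt).content

print(answer("What is the refund policy?"))
```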
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Full Stack Developer
Experience: 5-9 Years
Mandatory Skills: Python, Flask, Django, ReactJs, AWS
Required skills:
Bachelor's degree in Computer Science or equivalent practical experience
5–7 years of experience with backend programming languages: Python, Django
3–5 years of experience with front-end programming languages: React, Angular
3–5 years of experience developing RESTful API applications
3–5 years of experience with SQL and NoSQL databases (MySQL, AWS Aurora, MongoDB, Redis, Cassandra, Redshift, AWS Dynamo)
1–3 years of experience in AWS Cloud (EC2, Kubernetes, Terraform)
1–3 years of experience in Azure Cloud
5–7 years of experience using Git version control and GitHub
Hands-on experience with Jenkins for CI/CD pipelines
Proficient in using Jira, Confluence, and Slack
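As a small illustration of the backend skills listed (Python RESTful APIs, here using Flask), a minimal sketch follows; the product resource and in-memory store are stand-ins for a real database layer and are not from the posting.

```python
# Minimal Flask REST endpoints: read one resource, create another. The in-memory
# dict stands in for a real SQL/NoSQL store; requires Flask 2.0+ for app.get/app.post.
from flask import Flask, jsonify, request

app = Flask(__name__)
_products = {1: {"id": 1, "name": "keyboard", "price": 1499.0}}  # stand-in for a real DB

@app.get("/api/products/<int:product_id>")
def get_product(product_id: int):
    product = _products.get(product_id)
    if product is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(product)

@app.post("/api/products")
def create_product():
    payload = request.get_json(force=True)
    new_id = max(_products) + 1
    _products[new_id] = {"id": new_id, **payload}
    return jsonify(_products[new_id]), 201

if __name__ == "__main__":
    app.run(debug=True)
```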
Posted 5 days ago
10.0 years
0 Lacs
India
Remote
Project Description: We are launching several projects in the Finastra Treasury space, including a greenfield implementation, extension to back office, and new products and features implementation. This includes both internal projects and client engagements for clients across the globe.
Responsibilities:
Technical configuration and installation of the Finastra Kondor suite of products
Validation of requirements and technical specifications with IT experts
Participation in system implementation, integration testing, go-live and DR activities
Planning and effective participation in solution design and implementation
Communicate status and report issues to the team leader
Mentor junior resources in the team to get them up to speed
Ability to work on multiple projects and still be able to deliver
Openness to go the extra mile while troubleshooting issues and implementing outside the client's business hours
Mandatory Skills Description:
Minimum 10 years of Kondor technical experience
Good experience with installations of Treasury systems like Kondor+, K+TP, Fusion Risk etc. and troubleshooting technical issues
Good knowledge of Linux, Unix, SQL, shell scripts and databases
Working experience with batches and Kubernetes
Working exposure to cloud platforms like Azure
Nice-to-Have Skills Description:
Good experience working with containers and Kubernetes, especially AKS
Good experience working on cloud platforms like Azure
Posted 5 days ago
4.0 years
0 Lacs
India
Remote
Job Title: Databricks Engineer
Location: Remote
Experience Level: 4-5 Years
Employment Type: Full-time
Required Qualifications:
6–7 years of experience in data engineering, with at least 3+ years working with Databricks in production environments.
Strong proficiency in Python and SQL.
Experience with Spark (PySpark/Scala), preferably in a Databricks environment.
Experience building and managing data pipelines on AWS, Azure, or GCP.
Solid understanding of data lake, data warehouse, and data mesh architectures.
Familiarity with modern data formats like Parquet, Avro, Delta Lake, etc.
Experience with containerization and orchestration tools (e.g., Docker, Kubernetes) is a plus.
Strong understanding of data quality, observability, and governance best practices.
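To make the PySpark/Delta expectations concrete, here is a short hedged sketch: read raw Parquet, apply a simple quality rule, and write a partitioned Delta table. The paths and column names are invented, and the Delta output assumes a Databricks or otherwise Delta-enabled Spark runtime.

```python
# Small PySpark sketch of the kind of pipeline work described above: read raw
# Parquet, apply a data-quality filter, and write a curated Delta table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

raw = spark.read.parquet("/mnt/raw/orders")          # hypothetical landing path

clean = (
    raw.filter(F.col("order_total") >= 0)            # basic data-quality rule
       .withColumn("order_date", F.to_date("order_ts"))
       .dropDuplicates(["order_id"])
)

(clean.write
      .format("delta")                               # assumes a Delta-enabled runtime
      .mode("overwrite")
      .partitionBy("order_date")
      .save("/mnt/curated/orders"))                  # hypothetical curated path
```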
Posted 5 days ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
Position Overview
You will lead automation, CI/CD, and cloud infrastructure initiatives, partnering with Development, QA, Security, and IT Operations. You’ll balance hands-on implementation with strategic architecture, mentoring, and on-call support. Your expertise with containers, CI tools, and version control will help ensure reliability, scalability, and continuous improvement.
✅ Key Responsibilities:
Design, build & maintain CI/CD pipelines using Jenkins (or equivalent), seamlessly integrating with Git for code version control
Containerization & orchestration: Create Docker images, manage container lifecycles; deploy and scale services in Kubernetes clusters (typically self‑managed or cloud‑managed)
Cloud infrastructure provisioning & automation: Use IaC tools like Terraform or Ansible to provision compute, networking, and storage in AWS/Azure/GCP cloud environments
Monitoring, logging & observability: Implement solutions like Prometheus, ELK, Grafana or equivalent to monitor performance, set alerts, and troubleshoot production issues
System reliability & incident management: Participate in on‑call rotation, perform root‑cause analysis, and own post‑incident remediation
Security & compliance: Embed DevSecOps practices—container image scanning, IAM policies, secrets management, and vulnerability remediation
Mentorship & leadership: Guide junior team members, propose process improvements, and help transition manual workflows to automated pipelines
🔧 Required Technical Skills (with proficiency)
Area | Required Skill (Rating) | Experience or Focus
Containers | Docker (4/5) | Image builds, Docker‑compose, multi‑stage CI integrations
Orchestration | Kubernetes (3.5/5) | Daily operations in clusters—deployments, services, Helm usage
Version Control | Git (4/5) | Branching strategy, pull requests, merge conflict resolution
CI/CD Automation | Jenkins (4/5) | Pipeline scripting (Groovy/Pipeline), plugin ecosystem, pipeline as code
Cloud Platforms | AWS / Azure / GCP (4/5) | Infrastructure provisioning, cost optimization, IAM setup
Scripting & Automation | Python, Bash, or equivalent | Writing automation tools, CI hooks, server scripts
Infrastructure as Code | Terraform, Ansible, or similar | Declarative templates, module reuse, environment isolation
Monitoring & Logging | Prometheus, ELK, Grafana, etc. | Alert definitions, dashboards, log aggregation
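One way to picture the "Scripting & Automation" and "Monitoring & Logging" rows above working together is a hedged Python CI hook that queries Prometheus and blocks a deploy when the error rate is too high. The Prometheus URL, metric, job label and threshold are assumptions for illustration.

```python
# Illustrative CI-hook style script: query Prometheus for a service's 5xx rate and
# fail the pipeline step if it exceeds a threshold. Endpoint, query and threshold
# are placeholders.
import sys
import requests

PROM_URL = "http://prometheus.internal:9090"          # hypothetical endpoint
QUERY = 'sum(rate(http_requests_total{status=~"5..",job="checkout"}[5m]))'
THRESHOLD = 0.5  # errors per second allowed before blocking the deploy

def error_rate() -> float:
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    return float(results[0]["value"][1]) if results else 0.0

if __name__ == "__main__":
    rate = error_rate()
    print(f"checkout 5xx rate over last 5m: {rate:.3f}/s")
    # A non-zero exit fails the calling Jenkins stage, gating the rollout.
    sys.exit(1 if rate > THRESHOLD else 0)
```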
Posted 5 days ago
3.0 - 12.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
We’re looking for passionate and skilled Java Developers with 3 to 12 years of experience in Spring Boot and Microservices to join our growing team in Trivandrum!
Key Skills:
Java (Core & Advanced)
Spring Boot
Microservices Architecture
RESTful APIs
Docker/Kubernetes (preferred)
CI/CD tools (Jenkins, Git, etc.)
Location: Trivandrum
Experience Required: 3 to 12 years
Apply Now: https://usource.ripplehire.com/s/tguz
Posted 5 days ago
0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Position Overview
You will lead automation, CI/CD, and cloud infrastructure initiatives, partnering with Development, QA, Security, and IT Operations. You’ll balance hands-on implementation with strategic architecture, mentoring, and on-call support. Your expertise with containers, CI tools, and version control will help ensure reliability, scalability, and continuous improvement.
✅ Key Responsibilities:
Design, build & maintain CI/CD pipelines using Jenkins (or equivalent), seamlessly integrating with Git for code version control
Containerization & orchestration: Create Docker images, manage container lifecycles; deploy and scale services in Kubernetes clusters (typically self‑managed or cloud‑managed)
Cloud infrastructure provisioning & automation: Use IaC tools like Terraform or Ansible to provision compute, networking, and storage in AWS/Azure/GCP cloud environments
Monitoring, logging & observability: Implement solutions like Prometheus, ELK, Grafana or equivalent to monitor performance, set alerts, and troubleshoot production issues
System reliability & incident management: Participate in on‑call rotation, perform root‑cause analysis, and own post‑incident remediation
Security & compliance: Embed DevSecOps practices—container image scanning, IAM policies, secrets management, and vulnerability remediation
Mentorship & leadership: Guide junior team members, propose process improvements, and help transition manual workflows to automated pipelines
🔧 Required Technical Skills (with proficiency)
Area | Required Skill (Rating) | Experience or Focus
Containers | Docker (4/5) | Image builds, Docker‑compose, multi‑stage CI integrations
Orchestration | Kubernetes (3.5/5) | Daily operations in clusters—deployments, services, Helm usage
Version Control | Git (4/5) | Branching strategy, pull requests, merge conflict resolution
CI/CD Automation | Jenkins (4/5) | Pipeline scripting (Groovy/Pipeline), plugin ecosystem, pipeline as code
Cloud Platforms | AWS / Azure / GCP (4/5) | Infrastructure provisioning, cost optimization, IAM setup
Scripting & Automation | Python, Bash, or equivalent | Writing automation tools, CI hooks, server scripts
Infrastructure as Code | Terraform, Ansible, or similar | Declarative templates, module reuse, environment isolation
Monitoring & Logging | Prometheus, ELK, Grafana, etc. | Alert definitions, dashboards, log aggregation
Posted 5 days ago
7.0 - 12.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Position: Sr DevOps Engineer
Experience: 7 to 12 years
Location: Mumbai (Goregaon)
Required Skills & Qualifications:
7–10 years of hands-on experience in DevOps, CI/CD, and infrastructure automation.
Proficiency in scripting languages: Shell, Bash, Python, Perl, Groovy.
Strong understanding of version control systems and CI/CD tools.
Deep knowledge of Linux/Unix systems administration.
Experience working with web servers/app servers (Tomcat, WebLogic).
Familiarity with containerization and cloud platforms like Docker, Kubernetes, AWS.
Exposure to job scheduling tools and monitoring frameworks.
Strong troubleshooting and analytical skills.
Excellent communication and collaboration abilities.
Posted 5 days ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description
We are seeking a highly skilled and experienced Platform Engineer to manage and enhance our entire application delivery platform, from CloudFront to the underlying EKS clusters and their associated components. The ideal candidate will possess deep expertise across cloud infrastructure, networking, Kubernetes, and service mesh technologies, coupled with strong programming skills. This role involves maintaining the stability, scalability, and performance of our production environment, including day-to-day operations, upgrades, troubleshooting, and developing in-house tools.
Main Responsibilities
Perform regular upgrades and patching of EKS clusters and associated components, and oversee the health, performance, and scalability of the EKS clusters.
Manage and optimize related components such as Karpenter (cluster autoscaling) and ArgoCD (GitOps continuous delivery).
Implement and manage service mesh solutions (e.g., Istio, Linkerd) for enhanced traffic management, security, and observability.
Participate in an on-call rotation to provide 24/7 support for critical platform issues; monitor the platform for potential issues and implement preventative measures.
Develop, maintain, and automate in-house tools and scripts using programming languages like Python or Go to improve platform operations and efficiency.
Configure and manage CloudFront distributions and WAF policies for efficient and secure content delivery and routing.
Develop and maintain documentation for platform architecture, processes, and troubleshooting guides.
Tech Stack
AWS: VPC, EC2, ECS, EKS, Lambda, CloudFront, WAF, MWAA, RDS, ElastiCache, DynamoDB, OpenSearch, S3, CloudWatch, Cognito, SQS, KMS, Secrets Manager, MSK
Terraform, GitHub Actions, Prometheus, Grafana, Atlantis, ArgoCD, OpenTelemetry
Required Skills and Experience
Proven 6+ years of experience as a Platform Engineer, Site Reliability Engineer (SRE), or similar role with a focus on end-to-end platform ownership.
In-depth knowledge and at least 4 years of hands-on experience with Amazon EKS and Kubernetes.
Strong understanding and practical experience with Karpenter, ArgoCD, and Terraform.
Solid grasp of core networking concepts and at least 5 years of extensive experience with AWS networking services (VPC, Security Groups, Network ACLs, CloudFront, WAF, ALB, DNS).
Demonstrable experience with SSL/TLS certificate management.
Proficiency in programming languages such as Python or Go for developing and maintaining automation scripts and internal tools.
Experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack).
Excellent problem-solving and debugging skills across complex distributed systems.
Strong communication and collaboration abilities.
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent practical experience).
Preferred Qualifications
Prior experience working with service mesh technologies (preferably Istio) in a production environment.
Experience building or contributing to Kubernetes controllers.
Experience with multi-cluster Kubernetes architectures.
Experience building AZ-isolated, DR architectures.
Remarks
*Please note that you cannot apply for PayPay (Japan-based jobs) or other positions in parallel or in duplicate.
PayPay 5 senses: Please refer to PayPay 5 senses to learn what we value at work.
Working Conditions
Employment Status: Full Time
Office Location: Gurugram (WeWork)
※The development center requires you to work in the Gurugram office to establish the strong core team.
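The "in-house tools and scripts using Python or Go" responsibility might look something like the hedged sketch below, which uses the official Kubernetes Python client to surface pods with frequent restarts; the namespace and restart threshold are arbitrary assumptions, and kubeconfig access is assumed.

```python
# Sketch of a small in-house platform tool: list pods that have restarted often so
# an on-call engineer can triage them. Namespace and threshold are placeholders.
from kubernetes import client, config

RESTART_THRESHOLD = 3
NAMESPACE = "production"  # hypothetical namespace

def noisy_pods():
    config.load_kube_config()  # or config.load_incluster_config() when run in-cluster
    v1 = client.CoreV1Api()
    for pod in v1.list_namespaced_pod(NAMESPACE).items:
        restarts = sum(cs.restart_count for cs in (pod.status.container_statuses or []))
        if restarts >= RESTART_THRESHOLD:
            yield pod.metadata.name, restarts

if __name__ == "__main__":
    for name, restarts in noisy_pods():
        print(f"{name}: {restarts} restarts")
```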
Posted 5 days ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Opening: Node.js/NestJS Developer (3+ Years Experience)
Location: Delhi / Gurgaon (Work from Office)
Job Type: Full-time, On-site
Experience Required: 3+ Years
About the Role
We are looking for a skilled and motivated Node.js/NestJS Developer to join our in-house engineering team. In this role, you will be responsible for developing scalable backend services, API integrations, and microservice-based architectures. You will play a key role in designing and delivering secure, high-performance server-side solutions that support our growing product and client needs. If you're passionate about backend development, system design, and working in a collaborative environment — we’d love to meet you.
Key Responsibilities
Design, develop, and maintain scalable server-side applications using Node.js and NestJS.
Build and integrate RESTful and GraphQL APIs.
Write clean, efficient, and well-documented code.
Collaborate with front-end developers, designers, and product managers to deliver high-quality software solutions.
Optimize applications for maximum speed and scalability.
Participate in code reviews, testing, and deployment processes.
Debug and troubleshoot production issues, ensuring smooth system performance.
Implement security and data protection best practices.
Required Skills & Qualifications
Minimum 3 years of hands-on experience in Node.js and NestJS.
Strong proficiency in JavaScript and TypeScript.
Solid understanding of RESTful APIs, WebSockets, and GraphQL.
Experience working with SQL and NoSQL databases like PostgreSQL, MySQL, MongoDB.
Familiarity with Docker, Git, CI/CD pipelines.
Knowledge of unit testing frameworks.
Good understanding of microservices architecture and API gateway concepts.
Strong problem-solving skills and attention to detail.
Cloud experience: familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
DevOps knowledge: experience with CI/CD pipelines and containerization technologies like Docker and Kubernetes.
Experience with Swagger for API documentation and creating stakeholder documents.
Experience with API authentication and security best practices.
Experience with agile development methodologies and tools.
Nice to Have Skills
Cloud platform experience: familiarity with AWS, Google Cloud Platform (GCP), or Microsoft Azure for deploying and managing backend services.
Containerization & orchestration: experience with Docker, Docker Compose, and a basic understanding of Kubernetes for managing microservices in production.
Messaging & queues: experience with message brokers like RabbitMQ, Kafka, or Redis Pub/Sub for building event-driven architectures.
Posted 5 days ago
6.0 - 10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About the Role
We are seeking a skilled Backend Engineer (.NET Core) with a strong focus on multitenant architecture, scalability, and performance. You will be a key contributor to our platform engineering team, responsible for building robust, scalable, and secure backend systems that support multiple tenants in a distributed cloud-native environment. This role is ideal for someone who combines technical depth in backend engineering with a passion for engineering excellence, non-functional requirements (NFRs), and platform scalability.
Key Responsibilities
Design and develop high-performance, scalable backend services using C# and .NET Core.
Build and maintain RESTful APIs and microservices for a multitenant SaaS platform.
Drive engineering best practices, including code quality, design patterns, and SOLID principles.
Work with cloud platforms (Google Cloud Platform & Azure) to implement cloud-native, multitenant solutions.
Implement and maintain containerized applications using Docker, Kubernetes, and Helm.
Ensure robust handling of non-functional requirements, including: tenant isolation, secure multi-tenancy, performance optimization, scalability across tenants, and observability with tenant-specific monitoring.
Develop and maintain automated testing frameworks (unit, integration, E2E).
Utilize CI/CD and GitOps workflows, leveraging tools such as Terraform and Helm, for Infrastructure as Code (IaC).
Collaborate in Agile environments using Scrum or Kanban methodologies.
Identify and proactively mitigate risks, dependencies, and bottlenecks to ensure optimal performance.
Must-Have Skills
6-10 years of backend development experience with strong hands-on skills in C#, .NET Core, and RESTful API development.
Experience in asynchronous programming, event-driven architecture, and Pub/Sub systems.
Strong foundation in OOP, data structures, and algorithms.
Proficient with Docker, Kubernetes, and Helm.
Experience with CI/CD, GitOps, and Infrastructure as Code (Terraform, Helm).
Strong understanding of multi-tenant architecture and NFRs: tenant isolation, shared vs. isolated models, security and resource partitioning, and performance tuning per tenant.
Proficient with relational databases (PostgreSQL preferred), with exposure to NoSQL.
Experience working in Agile/DevOps environments.
Nice-to-Have Skills
Experience with frontend technologies (React.js) for occasional full-stack collaboration.
Knowledge of modern API frameworks: gRPC, GraphQL, etc.
Familiarity with feature flagging, blue-green deployments, and canary releases.
Exposure to monitoring, logging, and alerting systems for multitenant environments.
Preferred Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Certifications in Azure or Google Cloud Platform are a plus.
Posted 5 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Work Location: Trivandrum/Pune
Role: Automation and DevSecOps Engineer
Experience: 5-8 yrs
Skills: Python programming with DevOps, SIEM + SOAR
Job Description:
At Company we take pride in delivering comprehensive Information Security Services to our internal customers. The security services are based on security platforms hosted on SaaS and IaaS platforms on private and public cloud service providers. The DevOps engineer will work on developing and implementing infrastructure in support of web and backend application deployment in the Information Security Services division. This position will work closely with the TechOps team, cloud providers and OEMs to ensure integration and automation towards an efficient, clean and reusable code base to empower DevOps in our Security Services area.
Key Skills:
Collaborate with development and operations teams to design, build, and deploy applications/scripts that automate routine manual processes, with a strong focus on security orchestration and automated response (SOAR) capabilities.
Proficient in developing Python scripts to automate routine tasks using cron jobs, scheduler services, or any workflow management tools, particularly in the context of security automation.
Ability to work closely with the operations team to understand, support, and resolve all technical challenges in routine operations, with a focus on enhancing security measures through automation.
Identify areas for improvement in existing programs and develop modifications that enhance security and automate response actions.
Possess strong analytical and debugging skills, especially in the context of security incident response and automation.
Demonstrated experience in integrating REST API frameworks with third-party applications, particularly for security orchestration purposes.
Knowledge of DevOps tools like Jenkins, Terraform, AWS CloudFormation, and Kubernetes, with an emphasis on their use in security automation.
Hands-on experience working with DBMS like MySQL, PostgreSQL, and NoSQL, with a focus on secure data management.
Comfortable working with Linux OS, especially in environments requiring secure configurations and automated security responses.
Keen interest and proven track record in automation both on-premise and in the cloud, with a focus on security orchestration and automated response.
Expertise in Git, Jenkins, and JIRA is a plus, particularly in the context of managing security-related projects.
Primary Skills Required:
Deep understanding of security concepts and the ability to work with security analysts to implement automation requirements for security orchestration and automated response.
Scripting – Python (mandatory), JavaScript/shell scripting, with a focus on security automation.
Jenkins, GitHub Actions – CI/CD, with a focus on automating security processes.
Containerized infrastructure management – Docker, Podman, K8s, with an emphasis on secure deployments.
AWS, Azure – ability to provision and manage infrastructure securely.
Version control systems – Git, with a focus on managing security-related code and configurations.
Good to Have:
Entry-level security certification (CompTIA Security+ or similar)
Ansible knowledge
Understanding of reporting tools – e.g. Grafana
Initial exposure to the Google Security Operations (SIEM + SOAR) suite
Educational & Professional Qualifications
Bachelor's / Master's full-time degree in a technical stream
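To illustrate the SOAR-style automation this posting emphasizes, here is a hedged Python sketch that polls a SIEM for new high-severity alerts and forwards them to a SOAR intake webhook. Both endpoints, the token handling, and the JSON shapes are hypothetical; no specific vendor API is implied.

```python
# Illustrative security-automation snippet: poll a (hypothetical) SIEM REST endpoint
# for new high-severity alerts and push each one to a (hypothetical) SOAR intake
# webhook for automated response. All URLs and field names are assumptions.
import os
import requests

SIEM_ALERTS_URL = "https://siem.example.internal/api/alerts"   # hypothetical
SOAR_WEBHOOK_URL = "https://soar.example.internal/api/intake"  # hypothetical
TOKEN = os.environ["SIEM_TOKEN"]

def fetch_high_severity_alerts():
    resp = requests.get(
        SIEM_ALERTS_URL,
        params={"severity": "high", "status": "new"},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=15,
    )
    resp.raise_for_status()
    return resp.json().get("alerts", [])

def forward_to_soar(alert: dict) -> None:
    payload = {
        "source": "siem-poller",
        "alert_id": alert.get("id"),
        "rule": alert.get("rule_name"),
        "entity": alert.get("src_ip"),
    }
    requests.post(SOAR_WEBHOOK_URL, json=payload, timeout=15).raise_for_status()

if __name__ == "__main__":
    for alert in fetch_high_severity_alerts():
        forward_to_soar(alert)
        print(f"forwarded alert {alert.get('id')} for automated response")
```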
Posted 5 days ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Position Overview
You will lead automation, CI/CD, and cloud infrastructure initiatives, partnering with Development, QA, Security, and IT Operations. You’ll balance hands-on implementation with strategic architecture, mentoring, and on-call support. Your expertise with containers, CI tools, and version control will help ensure reliability, scalability, and continuous improvement.
✅ Key Responsibilities:
Design, build & maintain CI/CD pipelines using Jenkins (or equivalent), seamlessly integrating with Git for code version control
Containerization & orchestration: Create Docker images, manage container lifecycles; deploy and scale services in Kubernetes clusters (typically self‑managed or cloud‑managed)
Cloud infrastructure provisioning & automation: Use IaC tools like Terraform or Ansible to provision compute, networking, and storage in AWS/Azure/GCP cloud environments
Monitoring, logging & observability: Implement solutions like Prometheus, ELK, Grafana or equivalent to monitor performance, set alerts, and troubleshoot production issues
System reliability & incident management: Participate in on‑call rotation, perform root‑cause analysis, and own post‑incident remediation
Security & compliance: Embed DevSecOps practices—container image scanning, IAM policies, secrets management, and vulnerability remediation
Mentorship & leadership: Guide junior team members, propose process improvements, and help transition manual workflows to automated pipelines
🔧 Required Technical Skills (with proficiency)
Area | Required Skill (Rating) | Experience or Focus
Containers | Docker (4/5) | Image builds, Docker‑compose, multi‑stage CI integrations
Orchestration | Kubernetes (3.5/5) | Daily operations in clusters—deployments, services, Helm usage
Version Control | Git (4/5) | Branching strategy, pull requests, merge conflict resolution
CI/CD Automation | Jenkins (4/5) | Pipeline scripting (Groovy/Pipeline), plugin ecosystem, pipeline as code
Cloud Platforms | AWS / Azure / GCP (4/5) | Infrastructure provisioning, cost optimization, IAM setup
Scripting & Automation | Python, Bash, or equivalent | Writing automation tools, CI hooks, server scripts
Infrastructure as Code | Terraform, Ansible, or similar | Declarative templates, module reuse, environment isolation
Monitoring & Logging | Prometheus, ELK, Grafana, etc. | Alert definitions, dashboards, log aggregation
Posted 5 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About Company: They balance innovation with an open, friendly culture and the backing of a long-established parent company, known for its ethical reputation. We guide customers from what's now to what's next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society.
About Client: Our client is a global digital solutions and technology consulting company headquartered in Mumbai, India. The company generates annual revenue of over $4.29 billion (₹35,517 crore), reflecting a 4.4% year-over-year growth in USD terms. It has a workforce of around 86,000 professionals operating in more than 40 countries and serves a global client base of over 700 organizations. Our client operates across several major industry sectors, including Banking, Financial Services & Insurance (BFSI), Technology, Media & Telecommunications (TMT), Healthcare & Life Sciences, and Manufacturing & Consumer. In the past year, the company achieved a net profit of $553.4 million (₹4,584.6 crore), marking a 1.4% increase from the previous year. It also recorded a strong order inflow of $5.6 billion, up 15.7% year-over-year, highlighting growing demand across its service lines. Key focus areas include Digital Transformation, Enterprise AI, Data & Analytics, and Product Engineering—reflecting its strategic commitment to driving innovation and value for clients across industries.
Job Title: UI/UX Design and Development
Location: Hyderabad
Experience: 5+
Job Type: Contract to hire
Notice Period: Immediate joiners
Primary Skills: UI and Blazor
Required Experience:
UI/UX experience; expert in HTML, CSS, and component-based architecture; solid understanding of modern UI frameworks
Blazor hands-on experience
.NET / JS interop practical knowledge
UI testing with Playwright or equivalent
C# proficiency
Performance optimization: ability to create fast, responsive UIs
Security best practices: familiar with UI-level security patterns
Real-time features: experience with live updates, e.g. SignalR, Redis
Preferred / Nice to Have:
Containerization: Docker, Kubernetes understanding
CI/CD: experience with GitHub Actions or similar pipelines
Cloud-native development
Component libraries: familiarity with Blazor UI libraries
Key Responsibilities:
Develop component-based UIs using Blazor
Implement UI/UX designs with HTML and CSS
Integrate .NET and JavaScript via JS interop
Write and maintain automated UI tests (Playwright)
Build performant real-time interfaces (SignalR, Redis, Channels)
Enforce UI security: roles, permissions and data protection
Posted 5 days ago
5.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
DevOps/Cloud Engineer (AWS Specialist) – Chennai
5-8 years experience
Immediate Requirement: DevOps/Cloud Engineer (AWS Specialist) – Chennai
We are seeking a highly skilled and certified DevOps/Cloud Engineer with 5-8 years of hands-on AWS experience to join our team in Chennai. This is an urgent requirement, and we are looking for candidates who can join immediately. This is a work-from-office position; remote work is not available.
Responsibilities:
As a DevOps/Cloud Engineer, you will be instrumental in designing, implementing, and managing scalable and secure cloud infrastructure on AWS. Your key responsibilities will include:
Designing and deploying robust, scalable, and highly available applications on AWS.
Managing and orchestrating containerised applications using Docker and Kubernetes.
Configuring and troubleshooting VPC networks to ensure secure and efficient connectivity.
Implementing and maintaining comprehensive monitoring solutions using tools like Grafana, ELK Stack, DataDog, and New Relic to ensure optimal system performance and proactive issue resolution.
Developing and maintaining automation scripts using Python, Shell, and Java to streamline operational tasks and improve efficiency.
Collaborating closely with development and operations teams to foster a culture of continuous integration and continuous delivery (CI/CD).
Troubleshooting and resolving complex infrastructure and application issues in a timely manner.
Ensuring adherence to best practices in cloud security and compliance.
Required Skills & Experience:
4-8 years of hands-on experience with Amazon Web Services (AWS).
Proven expertise in EC2, Docker, Kubernetes, and VPC.
Demonstrable experience with monitoring and dashboarding tools, specifically Grafana, ELK Stack, DataDog, and New Relic.
Strong scripting abilities in Python, Shell, and Java.
Certifications in DevOps or Cloud Computing (AWS certifications preferred) are highly valued.
Excellent problem-solving skills and a strong understanding of cloud architecture principles.
Ability to work independently and as part of a collaborative team in a fast-paced environment.
Work Location: Chennai (Work from office)
Immediate joiners are preferred
Apply: hr@letzbizz.com
#DevOpsJobs #AWSJobs #CloudEngineer #ChennaiJobs #ImmediateJoiners #WorkFromOffice #HiringNow #LetzBizz
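A small example of the "automation scripts using Python" responsibility, using boto3 (the AWS SDK for Python): list running EC2 instances missing a required cost-allocation tag. The tag key and region are assumptions made for illustration, not details from the posting.

```python
# Hedged automation-script sketch: report running EC2 instances without a
# required tag, paginating through describe_instances.
import boto3

REQUIRED_TAG = "CostCenter"   # hypothetical tagging policy
REGION = "ap-south-1"         # assumption: Mumbai region

def untagged_running_instances():
    ec2 = boto3.client("ec2", region_name=REGION)
    paginator = ec2.get_paginator("describe_instances")
    filters = [{"Name": "instance-state-name", "Values": ["running"]}]
    for page in paginator.paginate(Filters=filters):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    yield instance["InstanceId"]

if __name__ == "__main__":
    for instance_id in untagged_running_instances():
        print(f"{instance_id} is running without a {REQUIRED_TAG} tag")
```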
Posted 5 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Summary:
We are looking for a highly skilled Java Full Stack Developer who is proficient in designing and developing scalable applications using Core Java, the Spring framework, and modern front-end technologies like Angular or React. The ideal candidate will also have experience working with cloud platforms (preferably GCP), RESTful APIs, microservices, and exposure to Big Data technologies and DevOps practices.
Key Responsibilities:
Design, develop, and maintain robust Java-based backend systems using Core Java, Spring Boot, and microservices architecture.
Build responsive, user-friendly front-end interfaces using Angular or React.
Develop and integrate REST APIs for seamless data communication between services.
Work on cloud deployments, preferably on Google Cloud Platform (GCP).
Optimize application performance and ensure high availability and scalability.
Collaborate with cross-functional teams for design, development, and deployment.
Write clean, efficient, and well-documented code following best practices.
Participate in code reviews and mentor junior developers.
Integrate Big Data tools (Spark, Hadoop, etc.) where applicable.
Implement CI/CD pipelines and support infrastructure automation using DevOps tools.
Must-Have Skills:
Strong proficiency in Core Java and Spring (Spring Boot)
Experience with REST APIs and microservices architecture
Proficiency in front-end development using Angular or React
Hands-on experience with cloud platforms (preferably GCP)
Solid understanding of SQL and relational databases
Experience working in Agile/Scrum environments
Good to Have:
Exposure to Big Data frameworks like Spark, Hadoop, etc.
Familiarity with DevOps tools (Docker, Kubernetes, Jenkins, etc.)
Experience with NoSQL databases and data streaming platforms
Soft Skills:
Strong problem-solving and analytical skills
Excellent communication and collaboration abilities
Self-motivated with a proactive attitude
Posted 5 days ago
9.0 years
0 Lacs
Greater Bengaluru Area
On-site
Job Title: Solution Architect – .NET Core, Messaging Systems, Red Hat Linux, MS SQL Server, Container Orchestration
Location: Pune, Mumbai, Bangalore
Job Type: Full-Time | Permanent
Role Overview:
We are looking for a seasoned Solution Architect with deep technical expertise in .NET Core development, enterprise messaging systems, Red Hat Linux environments, MS SQL Server, and container orchestration platforms. This role involves designing scalable, secure, and high-performance solutions that align with business goals and technical strategy.
Key Responsibilities:
Architecture & Design
Lead the design and architecture of enterprise-grade applications and services using .NET Core.
Define and document architectural patterns, standards, and best practices.
Architect event-driven and message-oriented systems using technologies like HiveMQ, Kafka, RabbitMQ, or others.
Design and implement containerized solutions using Docker and orchestrate them with Kubernetes or OpenShift.
Technical Leadership
Provide technical leadership across multiple projects and teams.
Collaborate with stakeholders to translate business requirements into scalable technical solutions.
Conduct architecture reviews and ensure alignment with enterprise standards.
Mentor development teams and promote architectural excellence.
Infrastructure & Integration
Work with Red Hat Linux systems for deployment, configuration, and performance tuning.
Design and optimize MS SQL Server databases for scalability and reliability.
Integrate solutions with cloud platforms (Azure, AWS, or GCP) and on-premise infrastructure.
Governance & Compliance
Ensure solutions adhere to security, compliance, and governance standards.
Participate in risk assessments and mitigation planning.
Required Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
9+ years of experience in software development and architecture.
Strong hands-on experience with .NET Core, C#, and related frameworks.
Deep understanding of messaging systems (HiveMQ, Kafka, RabbitMQ, Azure Service Bus).
Proficiency in Red Hat Linux system administration and scripting.
Expertise in MS SQL Server design, optimization, and troubleshooting.
Solid experience with Docker, Kubernetes, and container orchestration.
Familiarity with CI/CD pipelines, DevOps practices, and cloud-native architectures.
Excellent communication, documentation, and stakeholder management skills.
Good-to-Have Qualifications:
Certifications in Azure Solutions Architect, Red Hat Certified Architect, or Kubernetes (CKA/CKAD).
Experience with microservices, API gateways, and service mesh.
Exposure to monitoring and logging tools like Prometheus, Grafana, ELK Stack.
Posted 5 days ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About us
slice: the way you bank
slice's purpose is to make the world better at using money and time, with a major focus on building the best consumer experience for your money. We've all felt how slow, confusing, and complicated banking can be. So, we're reimagining it. We're building every product from scratch to be fast, transparent, and feel good, because we believe that the best products transcend demographics, like how great music touches most of us. Our cornerstone products and services: slice savings account, slice UPI credit card, slice UPI, slice UPI ATMs, slice fixed deposits, slice borrow, and UPI-powered bank branch are designed to be simple, rewarding, and completely in your control. At slice, you'll get to build things you'd use yourself and shape the future of banking in India. We tailor our working experience with the belief that the present moment is the only real thing in life. And we have harmony in the present the most when we feel happy and successful together. We're backed by some of the world's leading investors, including Tiger Global, Insight Partners, Advent International, Blume Ventures, and Gunosy Capital.
About the role
We are looking for a Site Reliability Engineer with experience in building and implementing functional systems that improve the customer experience. Site Reliability Engineer responsibilities include deploying product updates, identifying production issues and implementing integrations that meet customer needs. Ultimately, you will execute and automate operational processes fast, accurately and securely.
What you'll do
Design and implement IT infrastructure including networking, storage, compute, backup and security.
Design and implement power distribution systems, optimize power usage efficiency and ensure redundancy to minimize downtime risks.
Architect network infrastructure for data center and cloud environments, including switches, routers, firewalls, VPC security groups, transit gateways, etc.
Implement high-speed interconnects and design network topologies to support scalable and resilient connectivity.
Architect storage solutions (NAS/SAN, block store, file store) tailored to meet performance, capacity, and data protection requirements.
Optimize compute resources through virtualization/containerization technologies like VMware ESX, Red Hat OpenShift, Microsoft Hyper-V and Nutanix Acropolis.
Design fault-tolerant architectures to ensure high availability and minimize service disruptions.
Develop rack layouts and configurations to maximize space utilization.
Deep dive into Linux server issues and automate configuration and deployment.
Document systems processes and runbooks.
Manage the data center vendor team; cable new servers, decommission old servers and manage system inventory.
Ensure successful execution of IT strategies, architecture guidelines, and standards, and guide project teams through the technology selection and architecture/security governance processes.
Manage and maintain the cloud DevOps pipeline and work with dev teams. Look for opportunities to optimize and enable consistent automated deployments.
Monitor standards/policy compliance by developing and executing governance processes and tools.
Provide mentoring and knowledge transfer to others, and promote an open culture and DevOps.
Participate in incident response and post-mortem activities to identify root causes and prevent recurrence.
Proactively identify and address performance bottlenecks, reliability issues, and security vulnerabilities.
Basic Qualifications:
5+ years of experience in the field
Experience in NAS, SAN, block storage, file storage
Experience in virtualization platforms like VMware ESX, Red Hat OpenShift, Nutanix Acropolis, Microsoft Hyper-V
Working knowledge of networking, switching, routing, firewalls
Good understanding of Linux
Expertise in Go, TypeScript, Git, Terraform
Solid understanding of monitoring and logging solutions (e.g., Prometheus, Grafana, ELK stack)
Experience with CI/CD pipelines and DevOps practices
Hands-on experience with public cloud: AWS/GCP
Working knowledge of Kubernetes
Life at slice
Life so good, you'd think we're kidding:
Competitive salaries. Period.
An extensive medical insurance that looks out for our employees & their dependants. We'll love you and take care of you, our promise.
Flexible working hours. Just don't call us at 3AM, we like our sleep schedule.
Tailored vacation & leave policies so that you enjoy every important moment in your life.
A reward system that celebrates hard work and milestones throughout the year. Expect a gift coming your way anytime you kill it here.
Learning and upskilling opportunities. Seriously, not kidding.
Good food, games, and a cool office to make you feel like home.
An environment so good, you'll forget the term "colleagues can't be your friends".
Posted 5 days ago
0.0 - 6.0 years
10 - 15 Lacs
Bengaluru, Karnataka
Remote
REQ # 5251
Mandatory Skills: Java 1.8, Spring MVC, Spring Boot, React JS (at least 3 years relevant experience), Servlets, JSP, SQL
Desired skills / Secondary skills*:
Notice period: Immediate to 7 days
Relevant years of experience*: 5-8 Years
Job Description
We are seeking a highly skilled and experienced Senior Software Engineer. The ideal candidate will have a strong background as a full stack developer in Java & JEE enterprise application development using Spring MVC, Spring Boot, React JS, Servlets, JSP, and SQL, and will be responsible for designing, developing, and maintaining high-quality software solutions.
Key Responsibilities:
Design, develop, and maintain web applications using Java, Spring MVC, Spring Boot, React JS, Servlets, and JSP.
Collaborate with cross-functional teams to define, design, and ship new features.
Write clean, maintainable, and efficient code.
Ensure the performance, quality, and responsiveness of applications.
Identify and correct bottlenecks and fix bugs.
Help maintain code quality, organization, and automation.
Participate in code reviews and provide constructive feedback.
Mentor junior developers and share best practices.
Work with customers to collate and document requirements and render them into technical solutions.
Requirements:
Bachelor's degree in Computer Science, Engineering, or a related field.
5+ years of experience in software development and delivery.
Strong proficiency in Java 1.8 and above, Spring MVC, Spring Boot, React JS and SQL.
Good experience in working with Servlets, JSP, EJB is an added advantage.
Proficiency in working with any of the application/web servers – WebLogic, JBoss, Tomcat.
Experience with front-end technologies such as HTML, CSS, JavaScript and jQuery.
Familiarity with version control systems (e.g., Git).
Experience with Docker and Kubernetes is an added advantage.
Experience in DevOps is an added advantage.
Excellent problem-solving skills and attention to detail.
Strong communication and teamwork skills.
Ability to work independently and manage multiple tasks.
Share resume: swatiramnani1987@gmail.com (Swati, 9580296834)
Job Types: Full-time, Permanent
Pay: ₹1,031,647.25 - ₹1,539,086.94 per year
Experience:
React JS: 5 years (Required)
JavaScript: 6 years (Required)
Spring Boot: 6 years (Required)
Spring MVC: 6 years (Required)
JSP: 6 years (Required)
Location: Bangalore, Karnataka (Required)
Work Location: Remote
Speak with the employer: +91 9580296834
Posted 5 days ago
8.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role: VP of Engineering
Experience: 8-10 years
Location: Bangalore
Notice Period: up to 30 days

What You’ll Own:
● Hands-On Technical Leadership & Core Tech Stack Development
○ Architect and code the first scalable version of our booking portal, routing engine, mobile-first CRM, and operational dashboard.
○ Contribute directly to the codebase, setting the standard for engineering excellence and coding culture.
○ Build systems that handle India-scale logistics, real-time demo scheduling, and payment flows, with high reliability and low latency.
○ Lead backend architecture and microservices strategy using tools like Go, Node.js, Kafka, Postgres, Redis, Kubernetes, and Terraform.
○ Coordinate API strategy across frontend (React), mobile (React Native), and edge interfaces, using GraphQL and gRPC contracts.
● Full-Stack & Platform Ownership
○ Collaborate with frontend engineers on React-based interfaces; enforce design system and performance best practices.
○ Work closely with mobile engineers on React Native, helping optimize cold-start time, offline sync, and background processing.
○ Oversee API versioning, mobile/web contract integrity, and cross-platform interface stability.
○ Enable observability, tracing, and proactive alerting across the platform (Grafana, Prometheus, Sentry, etc.).
● Systems Thinking, Automation & DevOps
○ Design and implement scalable, resilient, and modular backend architectures (evolving from monolith to microservices).
○ Integrate and automate CRM, logistics, inventory, payments, and customer apps into a cohesive real-time ERP-lite system.
○ Champion CI/CD pipelines, zero-downtime deploys, infrastructure as code (Terraform), and rollback safety protocols.
○ Set and uphold engineering SLAs and SLOs (e.g., 99.9% uptime, sub-1s booking latency).
● AI-Enabled Systems & Innovation
○ Drive the integration of AI/ML into operational workflows: predictive routing, lead scoring, demand forecasting, personalized journeys.
○ Collaborate with data and product teams to deploy models using frameworks like TensorFlow, PyTorch, or OpenAI APIs.
○ Ensure infrastructure supports scalable ML workflows and retraining cycles.
● Security, Compliance & Performance
○ Implement secure coding practices and enforce API security (OAuth2, RBAC, audit logging).
○ Lead efforts around payment data protection, customer data privacy, and infra-level security (SOC 2 readiness).
○ Champion system performance tuning, cost optimization, and scalability testing (load testing, caching, indexing).
● Leadership & Cross-Functional Collaboration
○ Hire, mentor, and grow engineers across specializations: backend, frontend, mobile, data, and DevOps.
○ Foster a culture of autonomy, excellence, ownership, and rapid iteration.
○ Collaborate with Product, Design, Ops, and CX to shape the roadmap, triage bugs, and ship high-impact features.

Qualifications:
● Technical Depth: Proven track record of designing, building, and scaling complex software systems from scratch. Strong proficiency in at least one modern backend language (e.g., Go, Python, Node.js, Java) and experience with relevant frameworks and databases.
● Architectural Acumen: Demonstrated ability to architect scalable, fault-tolerant, and secure systems. Experience with distributed systems, microservices, message queues (Kafka, RabbitMQ), and cloud-native architectures (Kubernetes, Docker).
● Hands-on Experience: A genuine passion for coding and a willingness to be hands-on with technical challenges, debugging, and code reviews.
● AI/ML Exposure: Experience integrating AI/ML models into production systems, understanding of data pipelines for AI, and familiarity with relevant tools/frameworks (e.g., TensorFlow, PyTorch, scikit-learn) is highly desirable.
● Leadership & Mentorship: Experience leading and mentoring engineering teams, fostering a collaborative and high-performance environment. Ability to attract, hire, and retain top engineering talent.
● Problem-Solving: Exceptional analytical and problem-solving skills, with a pragmatic approach to delivering solutions in a fast-paced, ambiguous environment.
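The SLO targets cited in this posting (99.9% uptime, sub-1s booking latency) translate directly into an error budget. A minimal sketch of that arithmetic, assuming a 30-day month and an illustrative helper name, is shown below in Python.

```python
# Minimal sketch: converting an uptime SLO into a monthly downtime budget.
# The 99.9% figure comes from the posting; the 30-day month is an assumption.

def downtime_budget_minutes(slo: float, days: int = 30) -> float:
    """Return the allowed downtime (in minutes) for a given uptime SLO."""
    minutes_in_window = days * 24 * 60
    return (1.0 - slo) * minutes_in_window

if __name__ == "__main__":
    for slo in (0.999, 0.9995, 0.9999):
        print(f"{slo:.4%} uptime -> {downtime_budget_minutes(slo):.1f} min/month")
    # A 99.9% target over a 30-day month leaves roughly 43 minutes of downtime.
```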
Posted 5 days ago
5.0 years
35 - 40 Lacs
Bengaluru, Karnataka, India
On-site
About The Opportunity
Join a leading IT services and consulting firm specializing in delivering cutting-edge e-commerce, cloud, and microservices solutions for Fortune 500 clients. As part of a dedicated software engineering team, you will architect and implement high-performance Java-based microservices that power mission-critical customer experiences. This role offers the chance to collaborate with cross-functional teams to design, build, and maintain scalable backend systems handling millions of transactions daily.

Role & Responsibilities
Design, develop, and maintain Java-based microservices and RESTful APIs using Spring Boot to support high-traffic e-commerce platforms.
Collaborate with product managers, QA engineers, and DevOps to deliver end-to-end features with a focus on performance, scalability, and maintainability.
Implement database schemas and optimize queries for SQL and NoSQL data stores to ensure low-latency data access.
Participate in architectural discussions and drive best practices for microservices, containerization, and cloud deployments on AWS.
Develop automated tests and integrate CI/CD pipelines (Jenkins, GitLab CI) to streamline build, test, and deployment processes.
Troubleshoot production issues, perform root-cause analysis, and implement robust monitoring and logging solutions with tools like ELK or Prometheus.

Skills & Qualifications
Must-Have
5+ years of hands-on experience in Java development with strong knowledge of Spring Boot, Spring MVC, and related frameworks.
Proven experience designing and building microservices architectures, RESTful APIs, and distributed systems at scale.
Expertise in SQL (MySQL, PostgreSQL) and NoSQL (MongoDB, DynamoDB) databases, including query optimization and schema design.
Solid understanding of AWS services (EC2, ECS/EKS, Lambda, RDS) and infrastructure-as-code tools such as CloudFormation or Terraform.
Preferred
Experience with containerization and orchestration technologies like Docker and Kubernetes in production environments.
Familiarity with DevOps practices, CI/CD tooling (Jenkins, GitLab CI), and monitoring solutions (Prometheus, ELK, CloudWatch).

Benefits & Culture Highlights
On-site work environment fostering close-knit collaboration, mentoring, and hands-on learning.
Competitive compensation package with performance-based bonuses and comprehensive health benefits.
Career development programs, technical workshops, and opportunities to work on innovative projects with global impact.

Interested candidates can share their resume to namrata@buconsultants.co with "SDE-2" as the subject line.

Skills: java, java backend, restful apis, distributed systems, gitlab ci, microservices, aws, ci/cd, containerization, spring boot, devops, docker, jenkins, kubernetes, sql, data structure, nosql, spring mvc, monitoring solutions
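The responsibilities above include optimizing queries for low-latency access to SQL stores. A minimal sketch of the underlying idea, assuming SQLite as a stand-in for MySQL/PostgreSQL and a hypothetical orders table (neither appears in the posting), is shown below in Python.

```python
# Minimal sketch: adding an index to speed up a frequent lookup.
# SQLite stands in for MySQL/PostgreSQL; the 'orders' table and its columns
# are hypothetical, chosen only to illustrate query optimization via indexing.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, status TEXT)")
conn.executemany(
    "INSERT INTO orders (customer_id, status) VALUES (?, ?)",
    [(i % 1000, "shipped") for i in range(10_000)],
)

# Without an index this query scans the whole table; with it, the engine can seek directly.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = ?", (42,)
).fetchall()
print(plan)  # the plan should mention idx_orders_customer rather than a full scan
conn.close()
```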
Posted 5 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Greetings!
One of our clients, a top MNC, is looking for Gen AI and Machine Learning Engineers.

Important Notes: Please share only profiles of candidates who can join immediately or within 7 days.
Base Locations: Gurgaon and Bengaluru (hybrid setup, 3 days work from office).
Role: Associate and Sr Associate L1/L2 (Multiple Positions)

SKILLS:
Bachelor's or master's degree in Computer Science, Data Science, Engineering, or a related field.
Experience with Agentic AI/frameworks.
Strong programming skills in languages such as Python and SQL/NoSQL.
Ability to build an analytical approach from business requirements, then develop, train, and deploy machine learning models and AI algorithms.
Exposure to Gen AI models such as OpenAI, Google Gemini, Runway ML, etc.
Experience in developing and deploying AI/ML and deep learning solutions with libraries and frameworks such as TensorFlow, PyTorch, scikit-learn, OpenCV, and/or Keras.
Knowledge of math, probability, and statistics.
Familiarity with a variety of machine learning, NLP, and deep learning algorithms.
Exposure to developing APIs using Flask/Django.
Good experience with cloud infrastructure such as AWS, Azure, or GCP.
Exposure to Gen AI, vector DBs/embeddings, and LLMs (Large Language Models).

GOOD TO HAVE:
Experience with MLOps: MLflow, Kubeflow, CI/CD pipelines, etc.
Experience with Docker, Kubernetes, etc.
Exposure to HTML, CSS, JavaScript/jQuery, Node.js, Angular/React.
Experience in Flask/Django is a bonus.

RESPONSIBILITIES:
Collaborate with software engineers, business stakeholders, and/or domain experts to translate business requirements into product features, tools, projects, and AI/ML, NLP/NLU, and deep learning solutions.
Develop, implement, and deploy AI/ML solutions.
Preprocess and analyze large datasets to identify patterns, trends, and insights.
Evaluate, validate, and optimize AI/ML models to ensure their accuracy, efficiency, and generalizability.
Deploy applications and AI/ML models into cloud environments such as AWS/Azure/GCP.
Monitor and maintain the performance of AI/ML models in production environments, identifying opportunities for improvement and updating models as needed.
Document AI/ML model development processes, results, and lessons learned to facilitate knowledge sharing and continuous improvement.

Interested candidates who closely match the JD and can join ASAP should apply with the details below:
Total exp:
Relevant exp in AI/ML:
Applying for Gurgaon or Bengaluru:
Open for hybrid:
Current CTC:
Expected CTC:
Can join ASAP:

We will call you once we receive your updated profile along with the above details.
Thanks,
Venkat Solti
solti.v@anlage.co.in
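The role above pairs model building (scikit-learn, TensorFlow, PyTorch) with API development in Flask/Django. A minimal sketch of that pattern, assuming Flask, a toy scikit-learn model, and a hypothetical /predict route (none of these specifics come from the posting), might look like this:

```python
# Minimal sketch: serving a scikit-learn model behind a Flask endpoint.
# The toy dataset, model choice, and /predict route are illustrative assumptions.
from flask import Flask, jsonify, request
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

app = Flask(__name__)

# Train a small model at startup; a real service would load a persisted artifact instead.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]  # e.g. [5.1, 3.5, 1.4, 0.2]
    prediction = model.predict([features])[0]
    return jsonify({"class": int(prediction)})

if __name__ == "__main__":
    app.run(port=5000)
```

A client could then POST {"features": [5.1, 3.5, 1.4, 0.2]} to /predict and receive the predicted class as JSON.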
Posted 5 days ago