
17627 Docker Jobs - Page 35

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

5.0 - 7.0 years

6 - 10 Lacs

Mumbai

Work from Office

Source: Naukri

About the Role: We are seeking a highly skilled Full Stack IBM webMethods.io Developer to join our dynamic team. The role involves working on a web application with cloud and on-prem backends, developed using a micro frontend and microservices architecture and hosted on Microsoft Azure. The ideal candidate should have extensive experience in designing, developing, and deploying scalable web applications while adhering to the SAFe Agile methodology using Azure DevOps.

Key Responsibilities:
- Collaborate with cross-functional teams to design, develop, and maintain web applications using IBM webMethods.io products.
- Architect and implement micro frontend and microservices-based solutions for high scalability and maintainability.
- Develop and maintain Azure-hosted solutions, ensuring high availability and security.
- Participate in SAFe Agile ceremonies including PI planning, daily stand-ups, and retrospectives.
- Utilize Azure DevOps for CI/CD pipeline setup, version control, and automated deployments.
- Perform code reviews, ensure coding standards, and mentor junior developers.
- Troubleshoot and resolve complex technical issues across frontend and backend systems.

Primary Skills:
- IBM webMethods.io API Gateway, B2B, DevPortal, Integration, and E2E monitoring
- IBM webMethods Integration Server
- Responsive and adaptive design techniques
- Linux OS
- Cloud: Microsoft Azure App Services, Azure Functions, Azure Storage, Azure SQL, Azure Key Vault, Azure Cosmos DB
- DevOps & CI/CD: Azure DevOps Pipelines, Repos, Boards
- Architecture & Design Patterns: Micro Frontend Architecture, Microservices Architecture

Secondary Skills:
- Testing: Unit Testing (xUnit, NUnit) and Integration Testing; Frontend Testing (Jasmine, Karma)
- API Management: RESTful API design and development; API Gateway, OAuth, OpenAPI/Swagger
- Security & Performance: Azure security best practices; application performance optimization and monitoring

Methodologies: SAFe Agile Framework; familiarity with PI Planning, Iterations, and Agile ceremonies.
Tools & Collaboration: Collaboration tools such as Microsoft Teams.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- Proven experience in micro frontend and microservices architecture
- Strong understanding of cloud-native application design, especially on Azure
- Excellent problem-solving skills and the ability to lead technical discussions

Nice to Have:
- Exposure to containerization technologies (Docker, Kubernetes)
- Knowledge of Azure API Management and Azure Active Directory (AAD)

Soft Skills:
- Strong problem-solving abilities and attention to detail
- Excellent communication skills, both verbal and written
- Effective time management and organizational capabilities
- Ability to work independently and within a collaborative team environment
- Strong interpersonal skills to engage with cross-functional teams

Posted 1 day ago

Apply

7.0 - 10.0 years

9 - 19 Lacs

Ahmedabad

Work from Office

Source: Naukri

Job Description for Sr. DevOps Engineer: We are seeking a highly skilled Senior DevSecOps/DevOps Engineer with extensive experience in cloud infrastructure, automation, and security best practices. The ideal candidate must have 7+ years of overall experience, with at least 3+ years of direct, hands-on Kubernetes management experience. The candidate must have strong expertise in building, managing, and optimizing Jenkins pipelines for CI/CD workflows, with a focus on incorporating DevSecOps practices into the pipeline.

Key Responsibilities:
- Design, deploy, and maintain Kubernetes clusters in cloud and/or on-premises environments.
- Build and maintain Jenkins pipelines for CI/CD, ensuring secure, automated, and efficient delivery processes.
- Integrate security checks (static code analysis, image scanning, etc.) directly into Jenkins pipelines.
- Manage Infrastructure as Code (IaC) using Terraform, Helm, and similar tools.
- Develop, maintain, and secure containerized applications using Docker and Kubernetes best practices.
- Implement monitoring, logging, and alerting using Prometheus, Grafana, and the ELK/EFK stack.
- Implement Kubernetes security practices including RBAC, network policies, and secrets management.
- Lead incident response efforts, root cause analysis, and system hardening initiatives.
- Collaborate with developers and security teams to embed security early in the development lifecycle (shift-left security).
- Research, recommend, and implement best practices for DevSecOps and Kubernetes operations.

Required Skills and Qualifications:
- 7+ years of experience in DevOps, Site Reliability Engineering, or Platform Engineering roles.
- 3+ years of hands-on Kubernetes experience, including cluster provisioning, scaling, and troubleshooting.
- Strong expertise in creating, optimizing, and managing Jenkins pipelines for end-to-end CI/CD.
- Experience in containerization and orchestration: Docker and Kubernetes.
- Solid experience with Terraform, Helm, and other IaC tools.
- Experience securing Kubernetes clusters, containers, and cloud-native applications.
- Scripting proficiency (Bash, Python, or Golang preferred).
- Knowledge of service meshes (Istio, Linkerd) and Kubernetes ingress management.
- Hands-on experience with security scanning tools (e.g., Trivy, Anchore, Aqua, SonarQube) integrated into Jenkins.
- Strong understanding of IAM, RBAC, and secret management systems like Vault or AWS Secrets Manager.
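The posting asks for security scanning tools such as Trivy integrated into Jenkins. As a hedged illustration (not this team's actual pipeline), the sketch below shows the kind of severity gate a pipeline step might apply to a Trivy-style JSON report; the `Results[].Vulnerabilities[]` field names follow Trivy's report layout, and the sample data is invented:

```python
import json

# Minimal severity gate for a CI pipeline step: parse a Trivy-style JSON
# report and fail the build when findings at or above a threshold exist.
# The report structure mirrors Trivy's "Results[].Vulnerabilities[]"
# layout; field names are assumptions based on that format.

SEVERITY_ORDER = ["UNKNOWN", "LOW", "MEDIUM", "HIGH", "CRITICAL"]

def gate(report: dict, threshold: str = "HIGH") -> tuple[bool, list[str]]:
    """Return (passed, offending CVE ids) for a scan report."""
    floor = SEVERITY_ORDER.index(threshold)
    offending = [
        v.get("VulnerabilityID", "?")
        for result in report.get("Results", [])
        for v in result.get("Vulnerabilities", []) or []
        if SEVERITY_ORDER.index(v.get("Severity", "UNKNOWN")) >= floor
    ]
    return (not offending, offending)

if __name__ == "__main__":
    sample = {
        "Results": [
            {"Vulnerabilities": [
                {"VulnerabilityID": "CVE-2024-0001", "Severity": "MEDIUM"},
                {"VulnerabilityID": "CVE-2024-0002", "Severity": "CRITICAL"},
            ]}
        ]
    }
    print(gate(sample, threshold="HIGH"))  # (False, ['CVE-2024-0002'])
```

A Jenkins stage would run the scanner, save its JSON output, and call a gate like this to decide whether the build proceeds.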

Posted 1 day ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Role summary: Reporting to the Lead Service Reliability Engineer, the Service Reliability Engineer is part of an enablement team that provides expertise and support to specialist teams designing, developing, and running customer-facing products as well as internal systems. On a day-to-day basis, the Service Reliability Engineer will be responsible for the observability of Creditsafe's technology estate and will be involved in the monitoring and escalation of events. A large part of the role will involve improving the monitoring system and processes, including integrating AI capabilities to reduce noise and improve incident mean time to repair.

Role objectives:
- Ensure our products are ready for life in production
- Embed reliability, observability, and supportability as features across the lifecycle of solution development
- Help to guide our engineering team's transformation
- Raise the bar for engineering quality
- Deliver higher service availability
- Improve Creditsafe's monitoring capabilities using AI technologies

Personal qualities:
- Trustworthy and quick thinking
- Optimistic and resilient: breed positivity and don't give up on the "right thing"
- Leadership and negotiation: sell, not tell; build support and consensus
- Creativity and high standards: develop imaginative solutions without cutting corners
- Fully rounded: experience of dev, support, security, ops, architecture, and sales

As a Service Reliability Engineer, you should have:
- A track record of troubleshooting and resolving issues in live production environments and implementing strategies to eliminate them
- Experience in a technical operations support role
- Demonstrable knowledge of AWS CloudWatch: creating dashboards, metrics, and log analytics
- Knowledge of one or more high-level programming languages such as Python, Node, or C#, plus shell scripting experience
- Proactive monitoring and alert validation: monitor critical infrastructure and services; validate alerts by analyzing logs, performance metrics, and historical data to reduce false positives
- Incident response and troubleshooting: perform troubleshooting; escalate unresolved issues to the appropriate technical teams; actively participate in incident management and communication
- Knowledge of AI/ML frameworks and tools for building operational intelligence solutions and automating repetitive SRE tasks
- Continuous improvement: improvement of monitoring solutions, reduction of alert noise, and implementation of AI technologies, including predictive analytics for system health, automated root cause analysis, intelligent alert correlation to reduce noise and false positives, and hands-on experience with AI-powered monitoring solutions for anomaly detection and automated incident response
- Strong ability and enthusiasm to learn new technologies in a short time, particularly emerging AI/ML technologies in the DevOps, platform, and SRE space
- Proficiency in container-based environments including Docker and Amazon ECS
- Experience automating infrastructure using "as code" tooling
- Strong OS skills (Windows and Linux)
- Understanding of relational and NoSQL databases
- Experience in a hybrid cloud-based infrastructure
- Understanding of infrastructure services including DNS, DHCP, LDAP, virtualization, server monitoring, and cloud services (Azure and AWS)
- Knowledge of continuous integration and continuous delivery, testing methodologies, TDD, and agile development methodologies
- Experience using CI/CD technologies such as Terraform and Azure DevOps Pipelines
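The alert-validation responsibility above (checking a new sample against historical data to cut false positives) can be illustrated with a minimal rolling z-score check; the threshold below is an illustrative default, not Creditsafe's tuning:

```python
from statistics import mean, stdev

# Sketch of alert validation against historical data: flag a new metric
# sample only when it deviates from the recent baseline by more than
# `z_threshold` standard deviations.

def is_anomalous(history: list[float], value: float, z_threshold: float = 3.0) -> bool:
    if len(history) < 2:
        return False           # not enough baseline to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu     # flat baseline: any change is notable
    return abs(value - mu) / sigma > z_threshold

cpu_history = [41.0, 39.5, 40.2, 40.8, 39.9, 40.4]
print(is_anomalous(cpu_history, 40.6))  # False: within baseline
print(is_anomalous(cpu_history, 95.0))  # True: far outside baseline
```

In practice this kind of check sits in front of the pager: samples inside the baseline are suppressed, and only genuine deviations escalate.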

Posted 1 day ago

Apply

3.0 - 5.0 years

9 - 17 Lacs

Hyderabad

Work from Office

Source: Naukri

We are seeking a skilled and motivated Java + GoLang Developer with over 3 years of hands-on experience in backend development. The ideal candidate should have a strong academic background, a solid understanding of software engineering principles, and excellent communication skills. You will work closely with cross-functional teams to build, optimize, and maintain scalable backend systems.

Key Responsibilities:
- Design, develop, and maintain backend services using Java and GoLang.
- Build RESTful APIs and integrate third-party services.
- Optimize performance and scalability of distributed systems.
- Collaborate with frontend developers, DevOps, and QA teams to ensure seamless integration.
- Write clean, modular, testable code and maintain high code quality.
- Participate in code reviews and mentor junior developers.
- Contribute to architecture decisions and improve development processes.

Required Skills:
- 3+ years of hands-on experience in backend development using Java (Spring Boot) and GoLang.
- Solid understanding of OOP, design patterns, and microservices architecture.
- Experience with REST APIs, JSON, PostgreSQL/MySQL, and message queues (e.g., Kafka, RabbitMQ).
- Familiarity with Docker, Kubernetes, and CI/CD pipelines.
- Experience with cloud platforms (AWS/GCP/Azure) is a plus.
- Strong debugging and problem-solving skills.
- Experience with API security (OAuth2, JWT).
- Familiarity with logging, monitoring, and observability tools.
- Exposure to Agile/Scrum development methodologies.
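The listing calls for API security experience with OAuth2 and JWT. A JWT is three base64url-encoded segments (header.payload.signature); the hedged stdlib sketch below decodes a payload for inspection only and deliberately skips signature verification, which production code must perform with a proper JWT library and the issuer's key:

```python
import base64, json

# A JWT is three base64url segments: header.payload.signature. This sketch
# decodes the payload for inspection only -- it does NOT verify the
# signature, which real API security requires.

def decode_payload(token: str) -> dict:
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a toy (unsigned) token just to demonstrate the shape.
header = base64.urlsafe_b64encode(b'{"alg":"none","typ":"JWT"}').rstrip(b"=").decode()
payload = base64.urlsafe_b64encode(b'{"sub":"user-42","scope":"read"}').rstrip(b"=").decode()
token = f"{header}.{payload}."

print(decode_payload(token))  # {'sub': 'user-42', 'scope': 'read'}
```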

Posted 1 day ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Company: Qualcomm India Private Limited
Job Area: Information Technology Group, Information Technology Group > Systems Analysis

General Summary: This individual will need to work closely with SA/Dev/QA team members to understand requirements, prepare test cases, test web applications, and identify/report defects in functionality. The candidate should have good communication, analytical, and problem-solving skills to help support the development process and ensure that project deliverables are met according to specifications. The candidate must have good database testing knowledge and good test automation experience with Java, Selenium/Playwright, Java/C#, Docker/Kubernetes, etc. This person must be able to work with other team members around the globe (US, India, etc.) to provide required support.

Minimum Qualifications:
- 3+ years of IT-relevant work experience with a Bachelor's degree, OR 5+ years of IT-relevant work experience without a Bachelor's degree.
- Minimum 4 years of testing web applications, including database testing.
- Minimum 3 years of test automation experience with Java, Selenium, TestNG, Maven, LoadRunner, JMeter.
- 5-8 years of QA/testing experience, with good manual testing and automation testing experience.
- Prior experience in AEM (Adobe Experience Manager) domain testing would be a plus.
- Bachelor's/Master's degree in any stream.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries.)

Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.

To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications, or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees, or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.

3076559

Posted 1 day ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site

Source: LinkedIn

At PwC, our people in infrastructure focus on designing and implementing robust, secure IT systems that support business operations. They enable the smooth functioning of networks, servers, and data centres to optimise performance and minimise downtime. Those in cloud operations at PwC focus on managing and optimising cloud infrastructure and services to enable seamless operations and high availability for clients. You will be responsible for monitoring, troubleshooting, and implementing industry-leading practices for cloud-based systems.

You are a reliable, contributing member of a team. In our fast-paced environment, you are expected to adapt, take ownership, and consistently deliver quality work that drives value for our clients and success as a team.

Skills: Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:
- Apply a learning mindset and take ownership of your own development.
- Appreciate diverse perspectives, needs, and feelings of others.
- Adopt habits to sustain high performance and develop your potential.
- Actively listen, ask questions to check understanding, and clearly express ideas.
- Seek, reflect, act on, and give feedback.
- Gather information from a range of sources to analyse facts and discern patterns.
- Commit to understanding how the business works and building commercial awareness.
- Learn and apply professional and technical standards (e.g. refer to specific PwC tax and audit guidance), and uphold the Firm's code of conduct and independence requirements.

Azure Operation Analyst (Associate/Senior Associate)
Demonstrates thorough abilities and/or a proven record of success as a team leader:
- Managing and supporting the Dev-to-Production cloud PaaS and platform to establish quality, performance, and availability of hosted services;
- Providing guidance and support for cloud technology practitioners (the application development team);
- Providing implementation and run-and-maintain services;
- Working on high-volume, mission-critical systems;
- Providing on-call support for production cloud environments;
- Working hands-on with customers to develop, migrate, and debug service issues;
- Providing updated server/process documentation and, as appropriate, creating documentation where none may exist;
- Focusing on rapid identification and resolution of customer issues;
- Answering questions and performing initial triage on problem reports;
- Providing first/second-level cloud environment support;
- Working very closely with application users to troubleshoot and resolve cloud-hosted application or system issues;
- Informing Technical Support Management about any escalations or difficult situations that require their involvement;
- Providing cloud customers with an industry-leading customer experience when engaging Technical Support;
- Assisting in Tier 2 and 3 triage, troubleshooting, remediation, and escalation of tickets tied to the product support function;
- Training and supporting junior team members in resolving product support tickets;
- Proactively identifying ways to optimize the product support function;
- Coordinating to establish and manage clear escalation guidelines for supported system components;
- Running database queries to look up and resolve issues;
- Demonstrating proven communication and collaboration skills to coordinate with developers and application teams to negotiate and schedule patching windows;
- Demonstrating experience in managing monthly Windows or Linux environment patching.

Must-Have Qualifications:
- Hands-on experience with Azure Web Apps, App Insights, App Service Plans, App Gateway, API Management, Azure Monitor, KQL queries, and other troubleshooting skills for all Azure PaaS and IaaS services.
- Proven verbal and written communication skills, which will be key in driving customer communication during critical events.
- Proficiency in at least one of the following technology domains: networking principles, system administration, DevOps, configuration management, and continuous integration technologies (Chef, Puppet, Docker, Jenkins).
- Proven understanding of the ITIL framework.

Good-to-Have Qualifications:
- Interest in information security and a desire to learn techniques and technologies such as application security, cryptography, threat modeling, and penetration testing.

Posted 1 day ago

Apply

8.0 - 13.0 years

20 - 35 Lacs

Bengaluru

Hybrid

Source: Naukri

Summary: Penguin Computing is seeking a software engineer with a background in software automation to join our Software group. Penguin Computing's Scyld Software products are used in the deployment, provisioning, management, and monitoring of some of the largest computational systems in the world. In this role, you will collaborate closely with technical architects, software engineers, product owners and managers, and services engineering teams to develop a new product that delivers software automation capabilities and all phases of infrastructure management to end customers, particularly in the AI space. We intend to take infrastructure-as-code principles to their fullest potential. As part of a talented and high-performing agile team, you will have the opportunity to make lasting impacts on our software and our customers.

The ideal candidate has an excellent understanding of the computer infrastructure lifecycle, from bare metal through to fully operational and ready for users. You will understand the challenges faced by scaling complex systems and networks. You will be a creative thinker, willing to be experimental but always maintaining the highest engineering rigor. The team is distributed; we are looking for team members who perform well given a high degree of independence and autonomy and can communicate effectively asynchronously.

Essential Duties and Responsibilities:
- Solid command of any of the programming languages such as Java, Python, C, or C++.
- Create, maintain, and improve Ansible playbooks and other code that manage Linux-based high-performance computing (HPC) and artificial intelligence (AI) environments.
- Write well-formulated, highly readable code with supporting tests and documentation.
- Participate in team workflow: stand-ups, code reviews, design discussions, research and report-backs.
- Evaluate new business requirements and write technical specifications.
- Work within the team on continuous improvement: mentoring junior engineers, knowledge-sharing, and improving our internal processes.
- Partner with field engineers on troubleshooting and remediation.
- Keep abreast of developments on the infrastructure management frontier.

Job Knowledge, Skills, and Abilities:
- Bachelor's degree in computer science/engineering or a similar discipline, or equivalent experience
- Deep understanding of and experience in software automation
- Experience with bare-metal provisioning: PXE and kickstart
- Experience with monitoring tools and strategies
- Excellent understanding of Linux-based systems, including system administration
- Deep understanding of and experience with configuration management tooling and processes such as Ansible
- Solid coding skills, including at least one scripting language, and a solid understanding of data structures
- Experience with Git and CI/CD tooling and practices
- Knowledge of security best practices and technologies
- Knowledge of the Nvidia GPU ecosystem (architecture, drivers, etc.)
- Practical knowledge of HPC technologies, including cluster management and stack
- Ability to communicate technical designs and concepts clearly and effectively
- Understanding of network technologies, architectures, and protocols
- Experience with virtualization architecture and platforms is preferred
- Experience with container-based software deployment and orchestration using Kubernetes
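The bare-metal provisioning experience requested above (PXE and kickstart) often comes down to templating per-host boot configuration. The sketch below renders a PXELINUX-style menu entry from inventory data; the paths, kernel names, and kickstart URL are hypothetical, not Scyld's actual layout:

```python
# Sketch: render a per-host PXELINUX menu entry from inventory data, the
# kind of templating a provisioning tool (or an Ansible template task)
# performs. Paths, kernel names, and the kickstart URL are hypothetical.

PXE_TEMPLATE = """\
LABEL {hostname}
  KERNEL images/{os}/vmlinuz
  APPEND initrd=images/{os}/initrd.img ks=http://provisioner/ks/{hostname}.cfg ip=dhcp
"""

def render_pxe_entry(host: dict) -> str:
    return PXE_TEMPLATE.format(hostname=host["hostname"], os=host["os"])

inventory = [
    {"hostname": "node001", "os": "rocky9"},
    {"hostname": "node002", "os": "rocky9"},
]

for host in inventory:
    print(render_pxe_entry(host))
```

At scale, the same idea drives diskless or kickstart-based installs: the PXE server hands each booting node the entry rendered for its MAC or hostname.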

Posted 1 day ago

Apply

8.0 - 13.0 years

10 - 15 Lacs

Mumbai

Work from Office

Source: Naukri

Job Summary: This position provides input and support for, and performs, full systems life cycle management activities (e.g., analyses, technical requirements, design, coding, testing, implementation of systems and applications software, etc.). He/She participates in component and data architecture design, technology planning, and testing for Applications Development (AD) initiatives to meet business requirements. This position provides input to applications development project plans and integrations. He/She collaborates with teams and supports emerging technologies to ensure effective communication and achievement of objectives. This position provides knowledge and support for applications development, integration, and maintenance. He/She provides input to department and project teams on decisions supporting projects.

Technical Skills:
- Strong proficiency in .NET, .NET Core, C#, REST APIs.
- Strong expertise in PostgreSQL.

Additional Preferred Skills:
- Docker, Kubernetes.
- Cloud: GCP and services such as Google Cloud Storage and Pub/Sub.
- Monitoring tools: Dynatrace, Grafana; API security and tooling (SonarQube).

Key Responsibilities:
- Design, develop, and maintain scalable C# applications and microservice implementations.
- Implement RESTful APIs for efficient communication between client and server applications.
- Collaborate with product owners to understand requirements and create technical specifications.
- Build robust database solutions and ensure efficient data retrieval.
- Write clean, maintainable, and efficient code.
- Conduct unit testing, integration testing, and code reviews to maintain code quality.
- Work on implementation of industry-standard protocols related to API security, including OAuth.
- Implement scalable and high-performance solutions that integrate with Pub/Sub messaging systems and other GCP services (BigQuery, Dataflow, Spanner, etc.).
- Collaborate with cross-functional teams to define, design, and deliver new features.
- Integrate and manage data flows between different systems using Kafka, Pub/Sub, and other middleware technologies.

Qualifications:
- Bachelor's degree or international equivalent.
- 8+ years of IT experience in .NET.
- Bachelor's degree or international equivalent in Computer Science, Information Systems, Mathematics, Statistics, or a related field preferred.

Posted 1 day ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Job Description: Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 17,000 stores in 31 countries serving more than 6 million customers each day. It is an exciting time to be a part of the growing Data Engineering team at Circle K. We are driving a well-supported cloud-first strategy to unlock the power of data across the company and help teams discover, value, and act on insights from data across the globe. With our strong data pipeline, this position will play a key role partnering with our Technical Development stakeholders to enable analytics for long-term success.

About The Role: We are looking for a Data Engineer with a collaborative, "can-do" attitude who is committed and strives with determination and motivation to make their team successful; a Data Engineer who has experience implementing technical solutions as part of a greater data transformation strategy. This role is responsible for hands-on sourcing, manipulation, and delivery of data from enterprise business systems to the data lake and data warehouse. This role will help drive Circle K's next phase in the digital journey by transforming data to achieve actionable business outcomes.

Roles and Responsibilities:
- Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals
- Demonstrate technical and domain knowledge of relational and non-relational databases, data warehouses, and data lakes, among other structured and unstructured storage options
- Determine solutions that are best suited to develop a pipeline for a particular data source
- Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development
- Efficient ELT/ETL development using Azure cloud services and Snowflake, including testing and operational support (RCA, monitoring, maintenance)
- Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery
- Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders
- Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability)
- Stay current with and adopt new tools and applications to ensure high-quality and efficient solutions
- Build a cross-platform data strategy to aggregate multiple sources and process development datasets
- Be proactive in stakeholder communication; mentor/guide junior resources through regular KT/reverse KT, help them identify production bugs/issues as needed, and provide resolution recommendations

Job Requirements:
- Bachelor's degree in Computer Engineering, Computer Science, or a related discipline; Master's degree preferred
- 3+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional data warehousing environment
- 3+ years of experience setting up and operating data pipelines using Python or SQL
- 3+ years of advanced SQL programming: PL/SQL, T-SQL
- 3+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization
- Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
- 3+ years of strong, extensive hands-on experience in Azure, preferably with data-heavy/analytics applications leveraging relational and NoSQL databases, data warehouses, and big data
- 3+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure Functions
- 3+ years of experience defining and enabling data quality standards for auditing and monitoring
- Strong analytical abilities and strong intellectual curiosity
- In-depth knowledge of relational database design, data warehousing, and dimensional data modeling concepts
- Understanding of REST and good API design
- Experience working with Apache Iceberg, Delta tables, and distributed computing frameworks
- Strong collaboration and teamwork skills; excellent written and verbal communication skills
- Self-starter, motivated, with the ability to work in a fast-paced development environment
- Agile experience highly desirable
- Proficiency in the development environment, including IDE, database server, Git, continuous integration, unit-testing tools, and defect management tools

Preferred Skills:
- Strong knowledge of data engineering concepts (data pipeline creation, data warehousing, data marts/cubes, data reconciliation and audit, data management)
- Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques
- Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks
- Working knowledge of DevOps processes (CI/CD), the Git/Jenkins version control tools, Master Data Management (MDM), and data quality tools
- Strong experience in ETL/ELT development, QA, and operation/support processes (RCA of production issues, code/data fix strategy, monitoring and maintenance)
- Hands-on experience with databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting
- ADF, Databricks, and Azure certifications are a plus

Technologies we use: Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (PowerShell, Bash), Git, Terraform, Power BI, Snowflake
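The ETL responsibilities above can be illustrated with a minimal extract-transform-load sketch; an in-memory SQLite table stands in for the warehouse, whereas the stack this posting describes would load into Snowflake/Synapse under ADF orchestration:

```python
import sqlite3

# Minimal extract-transform-load sketch. The in-memory SQLite database
# stands in for a warehouse; in the stack this posting describes, the
# load target would be Snowflake/Synapse with orchestration in ADF.

raw_rows = [  # extract: pretend these came from a source system
    {"store_id": "001", "sales": "1250.50", "date": "2024-06-01"},
    {"store_id": "002", "sales": "980.00", "date": "2024-06-01"},
    {"store_id": "001", "sales": "-1", "date": "2024-06-02"},  # bad record
]

def transform(rows):
    # cast types and drop records that fail a simple quality rule
    for r in rows:
        sales = float(r["sales"])
        if sales >= 0:
            yield (r["store_id"], sales, r["date"])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE daily_sales (store_id TEXT, sales REAL, date TEXT)")
conn.executemany("INSERT INTO daily_sales VALUES (?, ?, ?)", transform(raw_rows))

total, n = conn.execute("SELECT SUM(sales), COUNT(*) FROM daily_sales").fetchone()
print(n, total)  # 2 rows loaded, 2230.5 total
```

The quality rule in `transform` is the sketch's stand-in for the auditing and monitoring standards the requirements mention: bad records are rejected before load rather than after.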

Posted 1 day ago

Apply

6.0 - 11.0 years

25 - 40 Lacs

Hyderabad, Gurugram, Bengaluru

Hybrid

Source: Naukri

Job Title: Senior Backend Engineer – Core Java & Microservices (Multiple Positions)

Overview: We are hiring for multiple backend engineering roles. Candidates must demonstrate strong capabilities in either Core Java backend engineering or Microservices and Cloud architecture, with working knowledge of the other. Candidates with strengths in both areas will be considered for senior roles. You will be part of a high-performance engineering team solving complex business problems through robust, scalable, and high-throughput systems.

Core Technical Requirements
Candidates must demonstrate strength in either of the following areas, with working knowledge of the other.

Java & Backend Engineering
- Java 8+ (Streams, Lambdas, Functional Interfaces, Optionals)
- Spring Core, Spring Boot, object-oriented principles, exception handling, immutability
- Multithreading (Executor framework, locks, concurrency utilities)
- Collections, data structures, algorithms, time/space complexity
- Kafka (producer/consumer, schema, error handling, observability)
- JPA, RDBMS/NoSQL, joins, indexing, data modeling, sharding, CDC
- JVM tuning, GC configuration, profiling, dump analysis
- Design patterns (GoF creational, structural, behavioral)

Microservices, Cloud & Distributed Systems
- REST APIs, OpenAPI/Swagger, request/response handling, API design best practices
- Spring Boot, Spring Cloud, Spring Reactive
- Kafka Streams, CQRS, materialized views, event-driven patterns
- GraphQL (Apollo/Spring Boot), schema federation, resolvers, caching
- Cloud-native apps on AWS (Lambda, IAM, S3, containers)
- API security (OAuth 2.0, JWT, Keycloak, API Gateway configuration)
- CI/CD pipelines, Docker, Kubernetes, Terraform
- Observability with ELK, Prometheus, Grafana, Jaeger, Kiali

Additional Skills (Nice to Have)
- Node.js, React, Angular, Golang, Python, GenAI
- Web platforms: AEM, Sitecore
- Production support, rollbacks, canary deployments
- TDD, mocking, Postman, security/performance test automation
- Architecture artifacts: logical/sequence views, layering, solution detailing

Key Responsibilities
- Design and develop scalable backend systems using Java and Spring Boot
- Build event-driven microservices and cloud-native APIs
- Implement secure, observable, and high-performance solutions
- Collaborate with teams to define architecture, patterns, and standards
- Contribute to solution design, code reviews, and production readiness
- Troubleshoot, optimize, and monitor distributed systems in production
- Mentor junior engineers (for senior roles)
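The Kafka Streams / CQRS / materialized-view requirement above can be sketched in miniature. This is a hypothetical illustration written in Python for brevity (the role itself is Java-centric): commands append events to a write-side store, and a read-side view is materialized from the event stream. All class names and event shapes are invented, not part of the posting.

```python
from collections import defaultdict

class EventStore:
    """Append-only event log (the write side in CQRS)."""
    def __init__(self):
        self.events = []
        self.subscribers = []

    def append(self, event):
        self.events.append(event)
        for handler in self.subscribers:  # push to read-side projections
            handler(event)

class AccountBalanceView:
    """Materialized view (the read side), rebuilt from the event stream."""
    def __init__(self, store):
        self.balances = defaultdict(int)
        store.subscribers.append(self.apply)

    def apply(self, event):
        kind, account, amount = event
        if kind == "deposited":
            self.balances[account] += amount
        elif kind == "withdrawn":
            self.balances[account] -= amount

store = EventStore()
view = AccountBalanceView(store)
store.append(("deposited", "acc-1", 100))
store.append(("withdrawn", "acc-1", 30))
print(view.balances["acc-1"])  # 70
```

The same separation of command handling from query-side projections is what Kafka Streams materialized views provide at scale.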

Posted 1 day ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Job Title: Senior Python Developer – Backend Engineering
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience Required: 4–8 Years

About Darwix AI
Darwix AI is building India’s most advanced GenAI-powered platform for enterprise sales teams. We combine speech recognition, LLMs, vector databases, real-time analytics, and multilingual intelligence to power customer conversations across India, the Middle East, and Southeast Asia. We’re solving complex backend problems across speech-to-text pipelines, agent-assist systems, AI-based real-time decisioning, and scalable SaaS delivery. Our engineering team sits at the core of our product and works closely with AI research, product, and client delivery to build the future of revenue enablement. Backed by top-tier VCs, AI advisors, and enterprise clients, this is a chance to build something foundational.

Role Overview
We are hiring a Senior Python Developer to architect, implement, and optimize high-performance backend systems that power our AI platform. You will take ownership of key backend services, from core REST APIs and data pipelines to complex integrations with AI/ML modules. This role is for builders. You’ll work closely with product, AI, and infra teams, write production-grade Python code, lead critical decisions on architecture, and help shape engineering best practices.

Key Responsibilities
1. Backend API Development
- Design and implement scalable, secure RESTful APIs using FastAPI, Flask, or Django REST Framework
- Architect modular services and microservices to support AI, transcription, real-time analytics, and reporting
- Optimize API performance with proper indexing, pagination, caching, and load-management strategies
- Integrate with frontend systems, mobile clients, and third-party systems through clean, well-documented endpoints
2. AI Integrations & Inference Orchestration
- Work closely with AI engineers to integrate GenAI/LLM APIs (OpenAI, Llama, Gemini), transcription models (Whisper, Deepgram), and retrieval-augmented generation (RAG) workflows
- Build services to manage prompt templates, chaining logic, and LangChain flows
- Deploy and manage vector database integrations (e.g., FAISS, Pinecone, Weaviate) for real-time search and recommendation pipelines
3. Database Design & Optimization
- Model and maintain relational databases using MySQL or PostgreSQL; experience with MongoDB is a plus
- Optimize SQL queries, schema design, and indexes to support low-latency data access
- Set up background jobs for session archiving, transcript cleanup, and audio-data binding
4. System Architecture & Deployment
- Own backend deployments using GitHub Actions, Docker, and AWS EC2
- Ensure high availability of services through containerization, horizontal scaling, and health monitoring
- Manage staging and production environments, including DB backups, server health checks, and rollback systems
5. Security, Auth & Access Control
- Implement robust authentication (JWT, OAuth), rate limiting, and input validation
- Build role-based access controls (RBAC) and audit logging into backend workflows
- Maintain a compliance-ready architecture for enterprise clients (data encryption, PII masking)
6. Code Quality, Documentation & Collaboration
- Write clean, modular, extensible Python code with meaningful comments and documentation
- Build test coverage (unit, integration) using PyTest, unittest, or Postman/Newman
- Participate in pull requests, code reviews, sprint planning, and retrospectives with the engineering team

Required Skills & Qualifications
Technical Expertise
- 3–8 years of experience in backend development with Python or PHP
- Strong experience with FastAPI, Flask, or Django (at least one in production-scale systems)
- Deep understanding of RESTful APIs, microservice architecture, and asynchronous Python patterns
- Strong hands-on experience with MySQL (joins, views, stored procedures); bonus if familiar with MongoDB, Redis, or Elasticsearch
- Experience with containerized deployment using Docker and cloud platforms like AWS or GCP
- Familiarity with Git, GitHub, CI/CD pipelines, and Linux-based server environments
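As a small illustration of the pagination point under Backend API Development, here is a framework-free sketch (plain Python, hypothetical field names) of the offset-pagination metadata a list endpoint might return; a FastAPI or Flask handler would wrap this in a route.

```python
def paginate(items, page=1, page_size=20):
    """Offset pagination for a list endpoint: one page of items plus metadata."""
    total = len(items)
    start = (page - 1) * page_size
    return {
        "items": items[start:start + page_size],
        "page": page,
        "page_size": page_size,
        "total": total,
        "has_next": start + page_size < total,  # lets clients stop requesting
    }

result = paginate(list(range(45)), page=3, page_size=20)
print(result["items"], result["has_next"])  # [40, 41, 42, 43, 44] False
```

In production the slice would be pushed into the SQL layer (`LIMIT`/`OFFSET` or keyset pagination) rather than materializing the full list.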

Posted 1 day ago

Apply

3.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Linkedin logo

As a Senior Engineer - Software Development, you will play a crucial role in designing, developing, and implementing high-performance software solutions for GreyOrange’s robotic systems. You will collaborate closely with product managers, designers, and other engineers to deliver robust, scalable, and high-quality software that drives our automation solutions.

Responsibilities
- Code critical sections and key features of the product.
- Lead and solve key technical challenges in the overall system.
- Work in collaboration with architects to write low-level design documents and to create a technical roadmap.
- Rearchitect existing algorithms and implementations.
- Work with simulations for functional performance.
- Perform code reviews and give healthy peer feedback to the team.
- Mentor and guide team members technically.
- Observe and evangelize best technical practices.

Must Have
- 3+ years of work experience with demonstrated problem-solving skills.
- Experience designing and implementing non-trivial software systems (e.g., using multiple processes/threads/IPC).
- Development experience in Java, Python, Golang, or Erlang (any of these).
- Experience working on a microservice platform; scalability architecture.
- Experience working on REST-based API integration.
- Good RDBMS skills and experience in DB/SQL.
- Good understanding of design patterns, object-oriented design, and frameworks.
- Experience in algorithmic development.
- Good understanding of version control systems.

Qualification
- Education: Bachelor’s or master’s degree in Computer Science, Software Engineering, or a related field from a premier institute.
- Technical Skills: Proficiency in one or more programming languages such as Java, C++, Python, or C#. Experience with frameworks and libraries relevant to the technology stack.
- Problem-Solving: Strong analytical and troubleshooting skills. Ability to diagnose and resolve complex technical issues.
- Communication: Excellent verbal and written communication skills. Ability to convey technical information to non-technical stakeholders.
- Teamwork: Ability to work effectively in a team environment. Strong interpersonal skills and the ability to collaborate with colleagues at all levels.

Good to Have
- Exposure to serverless technologies.
- Exposure to various databases and associated technologies such as PostgreSQL, Redis, etc.
- Knowledge of Docker, Kubernetes, and cloud-based deployment environments (AWS, GCP, Azure, etc.).
- Knowledge of developing scripts in Python, Shell, etc.
- Knowledge of working with time-series databases (InfluxDB, etc.).
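The multi-process/thread requirement above can be illustrated with a minimal sketch. This is a hypothetical Python example (the role accepts several languages) of fanning a per-robot computation out across a thread pool; the `route_length` helper and bot names are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def route_length(path):
    # Stand-in for a per-robot computation, e.g. costing a planned route.
    return sum(abs(a - b) for a, b in zip(path, path[1:]))

paths = {"bot-1": [0, 3, 7], "bot-2": [2, 2, 9], "bot-3": [5, 1, 1]}

# Submit one task per robot and gather results as they complete.
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = {pool.submit(route_length, p): name for name, p in paths.items()}
    results = {futures[f]: f.result() for f in as_completed(futures)}

print(results["bot-1"])  # 7
```

For CPU-bound work, `ProcessPoolExecutor` (separate processes, true parallelism) would replace the thread pool with the same interface.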

Posted 1 day ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Hiring for a product-based company.

What you will bring
- Bachelor’s degree in Computer Science
- 2+ years of experience developing software in an Agile environment
- 2+ years of Java experience
- Relational database design and development on MySQL
- Experience with web services in Java using Spring Framework and JAX-RS
- HTML / CSS / Angular / TypeScript / JavaScript
- AWS environment development (S3, Lambda, EC2, Docker)

Posted 1 day ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Join us as a Software Engineer
This is an opportunity for a driven Software Engineer to take on an exciting new career challenge. Day-to-day, you'll be engineering and maintaining innovative, customer-centric, high-performance, secure and robust solutions. It’s a chance to hone your existing technical skills and advance your career while building a wide network of stakeholders. We're offering this role at associate level.

What you'll do
In your new role, you’ll be working within a feature team to engineer software, scripts and tools, as well as liaising with other engineers, architects and business analysts across the platform. You’ll also be:
- Producing complex and critical software rapidly and of high quality which adds value to the business
- Working in permanent teams who are responsible for the full life cycle, from initial development, through enhancement and maintenance to replacement or decommissioning
- Collaborating to optimise our software engineering capability
- Designing, producing, testing and implementing our working software solutions
- Working across the life cycle, from requirements analysis and design, through coding to testing, deployment and operations

The skills you'll need
To take on this role, you’ll need at least five years of experience in software engineering, software design, and architecture, and an understanding of how your area of expertise supports our customers. You’ll also need:
- Experience of Core Java, Spring Boot, Microservices and REST web services, along with a good understanding of databases such as Oracle, Postgres and SQL
- Knowledge of Docker, Kubernetes, Maven, version management, and collaboration tools like Git, GitLab, Jira and Confluence
- Understanding of cloud platforms such as GCP, AWS and Azure
- A background in solving highly complex, analytical and numerical problems

Posted 1 day ago

Apply

6.0 - 10.0 years

0 - 0 Lacs

Bengaluru

Work from Office

Naukri logo

Node.js Developer (5–7 years' experience)
We are looking for a highly skilled Backend Developer with expertise in Node.js, RabbitMQ, Redis, and Docker to build and maintain robust and scalable asynchronous processing systems. You will be responsible for designing queue-based architecture, optimizing real-time job processing pipelines, and managing containerized deployments.

Key Responsibilities:
- Design, develop, and maintain microservices and background workers using Node.js
- Build scalable and reliable message queuing systems using RabbitMQ
- Integrate Redis for job deduplication, caching, and fast state management
- Develop and manage Docker containers for service deployment and orchestration
- Ensure smooth communication between services using event-driven architecture
- Implement retry, back-off, dead-letter queue, and message acknowledgment strategies
- Collaborate with DevOps to set up monitoring, scaling, and auto-recovery
- Optimize system performance for high-throughput asynchronous processing
- Write unit and integration tests for queue-handling logic

Required Skills:
- Strong proficiency in Node.js (ES6+, async/await, streams)
- Solid experience with RabbitMQ (message publishing, consuming, DLQs, exchange types)
- Experience with Redis (caching, pub/sub, TTL, key-expiration strategies)
- Knowledge of Docker, containerization, and multi-service orchestration using docker-compose
- Familiarity with message acknowledgment patterns and queue durability
- Experience in building scalable, fault-tolerant distributed systems
- Proficiency in REST APIs, JSON, and event-driven architecture

Good to have:
- Knowledge of Kubernetes or Docker Swarm
- Experience with logging and monitoring tools (e.g., Kibana)
- Familiarity with CI/CD pipelines
- Basic knowledge of MongoDB, MySQL
- Exposure to cloud platforms (AWS, GCP, Azure)
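The retry / back-off / dead-letter responsibilities above can be sketched abstractly. The role itself is Node.js, but for consistency the sketch below is Python; it illustrates the pattern, not the RabbitMQ API, and every name in it is hypothetical. Failed deliveries are retried with exponential back-off, and messages that exhaust their attempts are parked in a dead-letter list instead of being lost.

```python
import time

def process_with_retry(message, handler, max_attempts=3, base_delay=0.01):
    """Retry with exponential back-off; exhausted messages go to a dead-letter list."""
    dead_letter = []
    for attempt in range(1, max_attempts + 1):
        try:
            return handler(message), dead_letter
        except Exception:
            if attempt == max_attempts:
                dead_letter.append(message)  # would publish to a DLQ exchange
            else:
                time.sleep(base_delay * 2 ** (attempt - 1))  # 1x, 2x, 4x ... delay
    return None, dead_letter

# A handler that fails twice, then succeeds on the third delivery.
calls = {"n": 0}
def flaky(msg):
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return msg.upper()

result, dlq = process_with_retry("order-42", flaky)
print(result, dlq)  # ORDER-42 []
```

In RabbitMQ proper, the same effect is usually achieved declaratively with per-queue TTLs and a dead-letter exchange rather than in-process sleeps.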

Posted 1 day ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

Job Title: Solutions Architect / Technical Manager
Location: Noida (preferred), Mumbai
Job Type: Full-Time
Experience Level: 8+ years (including at least 3 years in an architecture or technical leadership role)

Role Overview:
We are seeking an experienced and dynamic Solutions Architect to join our growing team. The ideal candidate will have hands-on experience with complex software projects, a strong understanding of architectural design, and a demonstrated ability to lead teams from concept through successful delivery. This is a client-facing, technically strategic role that blends deep technical acumen with excellent communication and leadership skills.

Key Responsibilities:
- Engage with clients to understand their business objectives, technical landscape, and pain points.
- Architect end-to-end software solutions tailored to client needs, considering scalability, maintainability, performance, and security.
- Lead the technical discovery, solution proposal, and estimation processes during pre-sales and client engagement phases.
- Provide architectural oversight and guide development teams in delivering high-quality solutions on time.
- Drive implementation of best practices related to coding, testing, deployment, and system design.
- Translate business requirements into technical specifications and documentation.
- Ensure Non-Functional Requirements (NFRs) such as performance, availability, and security are well incorporated into solutions.
- Promote Agile/Scrum processes and participate in project planning, sprint reviews, and retrospectives.
- Maintain a strong understanding of deployment strategies (blue-green, canary, rolling updates, etc.).
- Continuously assess emerging technologies and trends to incorporate into solution designs and team capabilities.

Required Skills:
- Proven track record of leading multiple complex software projects through to successful delivery.
- Deep understanding of the Software Development Life Cycle (SDLC), Agile/Scrum methodologies, and engineering best practices.
- Strong expertise in system architecture, microservices, cloud-native design, and system integration.
- Proficiency in one or more programming languages (e.g., Java, Python, Node.js) and frontend frameworks (e.g., React, Angular, Vue).
- Strong knowledge of design patterns, data structures, and algorithms.
- Hands-on experience with major cloud platforms (AWS, Azure, GCP) and deployment architectures.
- Experience in product development life cycles from idea to production.
- Familiarity with DevOps tooling, CI/CD pipelines, containerization (Docker, Kubernetes), and infrastructure as code.
- Exposure to data engineering, AI/ML, or enterprise integrations.
- Excellent communication and interpersonal skills for working with both technical and non-technical stakeholders.
- A self-learner with a growth mindset and a passion for staying up to date with emerging technologies, patterns, and tools.

Qualifications:
- Bachelor’s or master’s degree in Computer Science, Engineering, or a related field.
- Certifications in solution architecture or system design, such as:
  - AWS Certified Solutions Architect – Associate/Professional
  - Microsoft Certified: Azure Solutions Architect Expert
  - TOGAF, Zachman Framework, or other enterprise architecture certifications
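Of the deployment strategies the listing names, canary routing is the easiest to sketch: send a small, weighted fraction of traffic to the new release and watch its error rate before widening the rollout. A minimal, infrastructure-free Python illustration (all names hypothetical; real canaries are usually configured at the load balancer or service mesh, not in application code):

```python
import random

def canary_router(stable, canary, canary_weight=0.1, rng=random.random):
    """Return a router sending ~canary_weight of requests to the canary release."""
    def route():
        return canary if rng() < canary_weight else stable
    return route

# Deterministic demonstration: inject rng values below/above the weight.
route_low = canary_router("v1", "v2", canary_weight=0.1, rng=lambda: 0.05)
route_high = canary_router("v1", "v2", canary_weight=0.1, rng=lambda: 0.50)
print(route_low(), route_high())  # v2 v1
```

Blue-green deployment is the degenerate case: flip `canary_weight` from 0 to 1 in one step once the new environment is verified.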

Posted 1 day ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

Years of experience: 10–15 yrs
Location: Noida

Join us as a Cloud Engineer at Dailoqa, where you will be responsible for operationalizing cutting-edge machine learning and generative AI solutions, ensuring scalable, secure, and efficient deployment across infrastructure. You will work closely with data scientists, ML engineers, and business stakeholders to build and maintain robust MLOps pipelines, enabling rapid experimentation and reliable production implementation of AI models, including LLMs and real-time analytics systems.

To be successful as a Cloud Engineer you should have experience with:
- Cloud sourcing, networks, VMs, performance, scaling, availability, storage, security, and access management
- Deep expertise in one or more cloud platforms: AWS, Azure, GCP
- Strong experience in containerization and orchestration (Docker, Kubernetes, Helm)
- Familiarity with CI/CD tools: GitHub Actions, Jenkins, Azure DevOps, ArgoCD, etc.
- Proficiency in scripting languages (Python, Bash, PowerShell)
- Knowledge of MLOps tools such as MLflow, Kubeflow, SageMaker, Vertex AI, or Azure ML
- Strong understanding of DevOps principles applied to ML workflows

Key responsibilities may include:
- Design and implement scalable, cost-optimized, and secure infrastructure for AI-driven platforms.
- Implement infrastructure as code using tools like Terraform, ARM, or CloudFormation.
- Automate infrastructure provisioning, CI/CD pipelines, and model deployment workflows.
- Ensure version control, repeatability, and compliance across all infrastructure components.
- Set up monitoring, logging, and alerting frameworks using tools like Prometheus, Grafana, ELK, or Azure Monitor.
- Optimize performance and resource utilization of AI workloads, including GPU-based training/inference.
- Experience with Snowflake and Databricks for collaborative ML development and scalable data processing.
- Understanding of model interpretability, responsible AI, and governance.
- Contributions to open-source MLOps tools or communities.
- Strong leadership, communication, and cross-functional collaboration skills.
- Knowledge of data privacy, model governance, and regulatory compliance in AI systems.
- Exposure to LangChain, vector DBs (e.g., FAISS, Pinecone), and retrieval-augmented generation (RAG) pipelines.
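The vector-DB / RAG exposure mentioned above reduces to similarity search over embeddings: rank stored document vectors by closeness to a query vector and feed the top hits to the LLM. A minimal, library-free Python sketch (toy 3-dimensional vectors and invented doc IDs stand in for a real FAISS or Pinecone index):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, index, top_k=2):
    """Rank stored documents by similarity to the query embedding."""
    scored = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:top_k]]

index = {
    "doc-a": [1.0, 0.0, 0.0],
    "doc-b": [0.0, 1.0, 0.0],
    "doc-c": [0.9, 0.1, 0.0],
}
top = retrieve([1.0, 0.0, 0.0], index, top_k=2)
print(top)  # ['doc-a', 'doc-c']
```

Real vector databases replace the linear scan with approximate nearest-neighbor structures (HNSW, IVF) so the same query stays fast at millions of vectors.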

Posted 1 day ago

Apply

12.0 - 17.0 years

14 - 19 Lacs

Mumbai

Work from Office

Naukri logo

Job Summary Supports enterprise information systems housed in highly automated and secure centralized data centers, large computer rooms, corporate facilities, and other company locations. Ensures a stable operating environment and maximum use of system facilities. Develops effective relationships with business stakeholders and various end users. Ensures required IT services are identified, developed, and supported to the satisfaction of the stakeholders. Establishes and maintains service level agreements and is the main point of contact for all service issues. Performs technical, analytical, or maintenance work. Typically, knowledge is gained through a combination of formal education in a vocational or technical degree program and on-the-job training. Requires full proficiency in a range of technical, analytical, or scientific processes or procedures through training and considerable on-the-job experience. Completes a variety of atypical assignments. Works within defined technical processes and procedures or methodologies and may help determine the appropriate approach for new assignments. Works with a limited degree of supervision, with oversight focused only on complex new assignments. Acts as an informal resource for colleagues with less experience. 
Qualification:
- 12+ years of experience
- Understanding of the IT infrastructure and its relationship to the operation
- Bachelor's degree in Computer Science, Information Systems, or equivalent preferred

Primary Skills:
- Expert-level server administration: networking, Linux administration, Windows Server administration, SQL Server administration
- Advanced process optimization and debugging (e.g., gdb, valgrind, dmesg)
- Designing scalable system architecture (load balancing, microservices, containerization)
- Infrastructure as Code (IaC) tools (Terraform, Ansible)
- Cloud-native Linux administration (AWS, Azure, GCP)
- Security hardening and compliance (e.g., iptables, patching, user access control)
- Kernel tuning and low-level diagnostics (sysctl, kernel modules)
- Technical leadership and mentoring in Linux best practices
- Automation and CI/CD integration (Puppet, Helm, GitLab CI, Docker, kubectl)
- Strong knowledge of PC hardware, server architecture, and networking
- Excellent documentation skills
- Excellent troubleshooting and analytical skills
- Excellent process management skills
- Proficient in Microsoft Office

Secondary Skills:
- Basic knowledge of clustering technologies
- Willingness to learn new technologies
- Minimal supervision required

Posted 1 day ago

Apply

4.0 years

0 Lacs

Delhi, India

Remote

Linkedin logo

About Us
MyRemoteTeam, Inc is a fast-growing distributed workforce enabler, helping companies scale with top global talent. We empower businesses by providing world-class software engineers, operations support, and infrastructure to help them grow faster and better.

Job Title: Golang Developer
Experience: 4+ Years
Location: Onsite - Delhi

Important: We are looking for a Backend Developer who is proficient in Java/Golang/C++/Python, has at least 2 years of experience in Golang, and is interested in working in Golang. Proficiency in Docker and Kubernetes is required.
Bonus points: working knowledge of SQL databases for a product at scale, and AWS.

Key Responsibilities:
- Minimum 4+ years of work experience in backend development and building scalable products
- Work on category-creating, possibly disruptive, fintech products in early stages
- Design and develop highly scalable and reliable systems end-to-end
- Work directly with the CTO and Founders, and be a part of product strategy
- Self-starter: you will be working alongside other engineers and developers, collaborating on the various layers of the infrastructure for our products built in-house
- Expert knowledge of computer science, with strong competencies in data structures, algorithms, and software design
- Familiarity with Agile development, continuous integration, and modern testing methodologies
- Familiarity with working with REST APIs
- Think out of the box while solving problems, considering the scale and changing environment variables
- Quickly learn and contribute to a changing technical stack
- Interest (and/or experience) in the financial/stock market space - interest trumps experience!

Posted 1 day ago

Apply

2.0 - 4.0 years

7 - 9 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Naukri logo

POSITION: Senior Data Engineer / Data Engineer
LOCATION: Bangalore/Mumbai/Kolkata/Gurugram/Hyderabad/Pune/Chennai
EXPERIENCE: 2+ Years

JOB TITLE: Senior Data Engineer / Data Engineer

OVERVIEW OF THE ROLE:
As a Data Engineer or Senior Data Engineer, you will be hands-on in architecting, building, and optimizing robust, efficient, and secure data pipelines and platforms that power business-critical analytics and applications. You will play a central role in the implementation and automation of scalable batch and streaming data workflows using modern big data and cloud technologies. Working within cross-functional teams, you will deliver well-engineered, high-quality code and data models, and drive best practices for data reliability, lineage, quality, and security.

© HASHEDIN BY DELOITTE 2025

Mandatory Skills:
- Hands-on software coding or scripting for a minimum of 3 years
- Experience in product management for at least 2 years
- Stakeholder management experience for at least 3 years
- Experience in at least one of the GCP, AWS, or Azure cloud platforms

Key Responsibilities:
- Design, build, and optimize scalable data pipelines and ETL/ELT workflows using Spark (Scala/Python), SQL, and orchestration tools (e.g., Apache Airflow, Prefect, Luigi).
- Implement efficient solutions for high-volume, batch, real-time streaming, and event-driven data processing, leveraging best-in-class patterns and frameworks.
- Build and maintain data warehouse and lakehouse architectures (e.g., Snowflake, Databricks, Delta Lake, BigQuery, Redshift) to support analytics, data science, and BI workloads.
- Develop, automate, and monitor Airflow DAGs/jobs on cloud or Kubernetes, following robust deployment and operational practices (CI/CD, containerization, infra-as-code).
- Write performant, production-grade SQL for complex data aggregation, transformation, and analytics tasks.
- Ensure data quality, consistency, and governance across the stack, implementing processes for validation, cleansing, anomaly detection, and reconciliation.
- Collaborate with data scientists, analysts, and DevOps engineers to ingest, structure, and expose structured, semi-structured, and unstructured data for diverse use cases.
- Contribute to data modeling, schema design, and data partitioning strategies, and ensure adherence to best practices for performance and cost optimization.
- Implement, document, and extend data lineage, cataloging, and observability through tools such as AWS Glue, Azure Purview, Amundsen, or open-source technologies.
- Apply and enforce data security, privacy, and compliance requirements (e.g., access control, data masking, retention policies, GDPR/CCPA).
- Take ownership of the end-to-end data pipeline lifecycle: design, development, code reviews, testing, deployment, operational monitoring, and maintenance/troubleshooting.
- Contribute to frameworks, reusable modules, and automation to improve development efficiency and maintainability of the codebase.
- Stay abreast of industry trends and emerging technologies, participating in code reviews, technical discussions, and peer mentoring as needed.

Skills & Experience:
- Proficiency with Spark (Python or Scala), SQL, and data pipeline orchestration (Airflow, Prefect, Luigi, or similar).
- Experience with cloud data ecosystems (AWS, GCP, Azure) and cloud-native services for data processing (Glue, Dataflow, Dataproc, EMR, HDInsight, Synapse, etc.).
- Hands-on development skills in at least one programming language (Python, Scala, or Java preferred); solid knowledge of software engineering best practices (version control, testing, modularity).
- Deep understanding of batch and streaming architectures (Kafka, Kinesis, Pub/Sub, Flink, Structured Streaming, Spark Streaming).
- Expertise in data warehouse/lakehouse solutions (Snowflake, Databricks, Delta Lake, BigQuery, Redshift, Synapse) and storage formats (Parquet, ORC, Delta, Iceberg, Avro).
- Strong SQL development skills for ETL, analytics, and performance optimization.
- Familiarity with Kubernetes (K8s), containerization (Docker), and deploying data pipelines in distributed/cloud-native environments.
- Experience with data quality frameworks (Great Expectations, Deequ, or custom validation), monitoring/observability tools, and automated testing.
- Working knowledge of data modeling (star/snowflake, normalized, denormalized) and metadata/catalog management.
- Understanding of data security, privacy, and regulatory compliance (access management, PII masking, auditing, GDPR/CCPA/HIPAA).
- Familiarity with BI or visualization tools (Power BI, Tableau, Looker, etc.) is an advantage but not core.
- Previous experience with data migrations, modernization, or refactoring legacy ETL processes to modern cloud architectures is a strong plus.
- Bonus: exposure to open-source data tools (dbt, Delta Lake, Apache Iceberg, Amundsen, Great Expectations, etc.) and knowledge of DevOps/MLOps processes.

Professional Attributes:
- Strong analytical and problem-solving skills; attention to detail and commitment to code quality and documentation.
- Ability to communicate technical designs and issues effectively with team members and stakeholders.
- Proven self-starter, fast learner, and collaborative team player who thrives in dynamic, fast-paced environments.
- Passion for mentoring, sharing knowledge, and raising the technical bar for data engineering practices.

Desirable Experience:
- Contributions to open-source data engineering/tools communities.
- Implementing data cataloging, stewardship, and data democratization initiatives.
- Hands-on work with DataOps/DevOps pipelines for code and data.
- Knowledge of ML pipeline integration (feature stores, model serving, lineage/monitoring integration) is beneficial.

EDUCATIONAL QUALIFICATIONS:
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field (or equivalent experience).
- Certifications in cloud platforms (AWS, GCP, Azure) and/or data engineering (AWS Data Analytics, GCP Data Engineer, Databricks).
- Experience working in an Agile environment with exposure to CI/CD, Git, Jira, Confluence, and code review processes.
- Prior work in highly regulated or large-scale enterprise data environments (finance, healthcare, or similar) is a plus.
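The data-quality responsibilities described in this role (validation, anomaly detection, reconciliation) share a common shape: declare column-level rules, run them over incoming rows, and collect violations. A minimal Python sketch in the spirit of Great Expectations-style checks, with invented rule and column names:

```python
def validate_rows(rows, rules):
    """Run column-level data-quality rules; return (row_index, column) failures."""
    failures = []
    for i, row in enumerate(rows):
        for column, check in rules.items():
            if not check(row.get(column)):
                failures.append((i, column))
    return failures

# Hypothetical expectations for an ingested transactions feed.
rules = {
    "id": lambda v: isinstance(v, int) and v > 0,   # not null, positive key
    "amount": lambda v: v is not None and v >= 0,   # non-null, non-negative
}
rows = [
    {"id": 1, "amount": 10.0},
    {"id": -5, "amount": 3.0},    # bad key
    {"id": 2, "amount": None},    # missing amount
]
print(validate_rows(rows, rules))  # [(1, 'id'), (2, 'amount')]
```

In a pipeline, the failure list would be routed to a quarantine table and a monitoring alert rather than printed.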

Posted 1 day ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

Job Title: DevOps Lead
Location: Noida (preferred), Mumbai
Job Type: Full-Time
Experience Level: 8+ years (with 2–3 years in a DevOps leadership role)

Role Overview:
We are looking for an experienced and proactive DevOps Lead to take ownership of our cloud-based infrastructure. This role is critical in shaping and maintaining scalable, secure, and high-performance environments for our applications. The ideal candidate will have deep technical expertise across cloud platforms, strong leadership skills, and a passion for automation, cost optimization, and DevOps best practices. You will work closely with development teams to define deployment architectures, implement robust CI/CD pipelines, and lead cloud transformation and migration initiatives. You will develop and lead a team of DevOps engineers and be a key contributor in client-facing technical pre-sales conversations.

Key Responsibilities:
- Own and manage all aspects of cloud infrastructure across multiple environments (development, staging, production).
- Collaborate with development and architecture teams to define and implement deployment architectures.
- Design and maintain CI/CD pipelines, infrastructure-as-code (IaC), and container orchestration platforms (e.g., Kubernetes).
- Lead cloud transformation and migration projects, from planning and solutioning to execution and support.
- Set up and enforce DevOps best practices, security standards, and operational procedures.
- Monitor infrastructure health, application performance, and resource utilization, with a focus on automation and optimization.
- Identify and implement cost-saving measures across cloud and infrastructure setups.
- Manage and mentor a team of DevOps engineers, driving continuous improvement and technical growth.
- Collaborate with business and technical stakeholders in pre-sales engagements, providing input on infrastructure design, scalability, and cost estimation.
- Maintain up-to-date documentation and ensure knowledge sharing within the team.

Required Skills:
- 8+ years of experience in DevOps, cloud infrastructure, and system operations.
- 2+ years in a leadership or team management role within DevOps or SRE.
- Exposure to DevSecOps practices and tools.
- Experience with multi-cloud or hybrid cloud environments.
- Strong experience with cloud platforms (AWS, Azure, or GCP) and cloud-native architectures.
- Proven track record in cloud migration and cloud transformation projects.
- Proficiency in Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or Pulumi.
- Deep knowledge of CI/CD tools (e.g., GitLab CI, Jenkins, GitHub Actions) and configuration management tools (e.g., Ansible, Chef).
- Hands-on experience with containerization (Docker) and orchestration (Kubernetes, ECS).
- Strong scripting and automation skills (e.g., Python, Bash, PowerShell).
- Familiarity with monitoring and observability tools (e.g., Prometheus, Grafana, ELK, CloudWatch).
- Deep understanding of security, compliance, and cost optimization in cloud environments.
- Excellent communication and documentation skills.
- Experience in supporting technical pre-sales and creating infrastructure proposals or estimations.
- Ability to work in client-facing roles, including technical discussions and solution presentations.

Qualifications:
- Bachelor’s or master’s degree in Computer Science, Engineering, or a related field.
- Cloud certifications such as:
  - AWS Certified DevOps Engineer or Solutions Architect
  - Microsoft Certified: DevOps Engineer Expert
  - Google Cloud DevOps Engineer
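CI/CD pipelines like those this role owns are, at bottom, dependency graphs of stages: a stage may only run once everything it depends on has succeeded. A minimal Python sketch using the standard library's graphlib to order a set of hypothetical stages (the stage names are invented, not tied to any particular CI tool):

```python
from graphlib import TopologicalSorter

# Each stage maps to the set of stages it depends on.
pipeline = {
    "build": set(),
    "unit-test": {"build"},
    "image": {"unit-test"},
    "deploy-staging": {"image"},
    "integration-test": {"deploy-staging"},
    "deploy-prod": {"integration-test"},
}

# A valid execution order: every stage appears after its dependencies.
order = list(TopologicalSorter(pipeline).static_order())
print(order[0], order[-1])  # build deploy-prod
```

Real CI systems (GitLab CI, GitHub Actions) express the same graph declaratively via `needs:`/`stages:` keys; `TopologicalSorter` also exposes an incremental API for running independent stages in parallel.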

Posted 1 day ago

Apply

3.0 years

0 Lacs

Greater Nashik Area

On-site


Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do?

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics.

Do You Dream Big? We Need You.

Job Title: Data Scientist
Location: Bangalore
Reporting to: Manager - Analytics

Purpose of the role
Contributing to the Data Science efforts of AB InBev’s global non-commercial analytics capability of Procurement Analytics. The candidate will be required to contribute to, and may also need to guide, the DS team staffed on the area and assess the efforts required to scale and standardize the use of Data Science across multiple ABI markets.

Key Tasks and Accountabilities
  • Understand the business problem and translate it into an analytical problem; participate in the solution design process.
  • Manage the full AI/ML lifecycle, including data preprocessing, feature engineering, model training, validation, deployment, and monitoring.
  • Develop reusable and modular Python code adhering to OOP (Object-Oriented Programming) principles.
  • Design, develop, and deploy machine learning models into production environments on Azure.
  • Collaborate with data scientists, software engineers, and other stakeholders to meet business needs.
  • Communicate findings clearly to both technical and business stakeholders.

Qualifications, Experience, Skills

Level of educational attainment required (1 or more of the following):
  • B.Tech/BE/Masters in CS/IS/AI/ML

Previous work experience required:
  • Minimum 3 years of relevant experience

Technical Skills Required

Must Have
  • Strong expertise in Python, including advanced knowledge of OOP concepts.
  • Exposure to AI/ML methodologies with previous hands-on experience in ML concepts like forecasting, clustering, regression, classification, optimization, deep learning, and NLP using Python.
  • Solid understanding of GenAI concepts and experience in Prompt Engineering and RAG.
  • Experience with version control tools such as Git.
  • Consistently display an intent for problem solving.
  • Strong communication skills (vocal and written).
  • Ability to effectively communicate and present information at various levels of an organization.

Good To Have
  • Preferred industry exposure in CPG and experience of working in the domain of Procurement Analytics.
  • Product-building experience would be a plus.
  • Familiarity with the Azure tech stack, Databricks, and MLflow in any cloud platform.
  • Experience with Airflow for orchestrating and automating workflows.
  • Familiarity with MLOps and containerization tools like Docker would be a plus.

Other Skills Required
  • Passion for solving problems using data.
  • Detail oriented, analytical and inquisitive.
  • Ability to learn on the go.
  • Ability to work independently and with others.

We dream big to create a future with more cheers.

Posted 1 day ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Title: DevOps Engineer
Experience: 5 Years
Location: Pune
Lead time: Immediate to 15 days

Skills/Experience Required:
  • Minimum of 5 years of hands-on experience in a DevOps/Build/Deployment role, with engineering and operations skills.
  • Primary tech stack: GitLab Pipelines/GitHub Actions/Jenkins, GitLab/GitHub, Terraform, Helm, AWS, Oracle DB.
  • Experience with Agile/Scrum, Continuous Integration, Continuous Delivery, and related tools.
  • Hands-on experience in production environments, both deploying and troubleshooting applications in Linux environments.
  • Strong experience automating with scripting languages such as Bash, Python, Groovy, and any deployment scripting languages.
  • Strong experience with CI/CD deployment supporting Java technologies (Jenkins, Nexus, Apache, JBoss, Tomcat).
  • Highly proficient in configuration management (Ansible, Chef, or similar).
  • Hands-on experience with containerization (Docker) and Kubernetes is required.
  • Good understanding of microservices architecture, design patterns, and standard methodologies.
  • Good understanding of networking, load balancing, caching, security, config and certificate management.

Nice to Have Skills:
  • Experience with some key AWS services (IAM, VPC, Lambda, EKS, MSK, Keyspaces, CodePipeline).
  • Experience with the Java ecosystem (Maven, Ant, Tomcat, JBoss).
  • Experience with the Node ecosystem (JavaScript, Angular, npm, jQuery).
  • Understanding of SOA and distributed computing.
  • Experience with Test Driven Development (TDD) practices with an automated testing framework.
  • Experience with Istio.
  • Experience with SQL Server.

Posted 1 day ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site


At Roche you can show up as yourself, embraced for the unique qualities you bring. Our culture encourages personal expression, open dialogue, and genuine connections, where you are valued, accepted and respected for who you are, allowing you to thrive both personally and professionally. This is how we aim to prevent, stop and cure diseases and ensure everyone has access to healthcare today and for generations to come. Join Roche, where every voice matters.

The Position
As a Data Scientist you will join the data science cluster in the Roche Informatics Data and Analytics Chapter (DnA). You will be part of one or several multi-disciplinary agile teams where you’ll actively shape the future of healthcare by using data science methods and principles to generate deeper insights from a great variety of data sources. To achieve this, you will proactively identify needs, design and implement analytical solutions, provide advice and consulting support to our key stakeholders, and show impact by executing proof-of-value initiatives or contributing to existing products.

As a Data Scientist you will:
  • Apply your expertise in NLP/LLM to develop and refine models that address Roche business needs.
  • Be involved in building and fine-tuning models and optimising their performance to provide valuable insights and solutions to business stakeholders.
  • Support prioritisation efforts, understand feasibility and business impact, and take smart risks to make informed decisions in a fast-paced, evolving environment to deliver patient benefits faster.
  • Collaborate within global agile teams in the Roche Informatics business and foundational domains to develop products that provide the highest value to both Roche Pharma and Diagnostics business stakeholders.
  • Provide methodical and implementation guidance as well as hands-on support around analytical LLM/NLP use cases.
  • Evaluate the pros and cons of different NLP approaches and Generative AI platforms with comprehensive quantitative and qualitative analysis.
  • Communicate findings and market the value of use cases to key stakeholders.
  • Contribute to positioning data science as a key competency within the enterprise.
  • Continuously look for opportunities to broaden knowledge, capabilities and skill set to enable talent to flow into different specialties.
  • Be a role model for knowledge sharing within the DnA chapter. Act as a coach, mentor, or buddy to help colleagues grow and develop.

Qualifications
  • M.Sc. or PhD in Computer Science, Physics, Statistics, Mathematics or an equivalent degree, with experience in machine learning/data mining/artificial intelligence. Experience working as a hands-on data scientist in the pharmaceutical industry is preferred.
  • Hands-on experience with Python programming and common NLP libraries (e.g., transformers, gensim, spaCy).
  • Familiarity with essential frameworks (e.g. PyTorch) and infrastructure components (Docker, GPU) for training, fine-tuning and evaluating NLP tasks.
  • Experience using both open-source (e.g. Hugging Face) and closed-source LLMs with different deep learning architectures.
  • Experience implementing RAG, working with knowledge databases, and using LLMs through APIs.
  • Good knowledge of effectively training and optimising language models to fit internal infrastructure and ensure seamless integration.
  • Familiarity with best practices for code generation, code documentation, data security, and compliance in cloud-based data science workflows.
  • Proven ability to add value and insight by providing advanced analytical solutions.
  • Data storytelling skills and experience using visualisation tools to communicate data and results to a non-technical audience.
  • International, goal-oriented mindset with a can-do attitude.
  • Fluency in written and spoken English.

Who we are
A healthier future drives us to innovate. Together, more than 100,000 employees across the globe are dedicated to advancing science, ensuring everyone has access to healthcare today and for generations to come. Our efforts result in more than 26 million people treated with our medicines and over 30 billion tests conducted using our Diagnostics products. We empower each other to explore new possibilities, foster creativity, and keep our ambitions high, so we can deliver life-changing healthcare solutions that make a global impact. Let’s build a healthier future, together.

Roche is an Equal Opportunity Employer.

Posted 1 day ago

Apply

12.0 - 22.0 years

15 - 25 Lacs

Bengaluru

Work from Office


Hi, Greetings from Sun Technology Integrators!!

This is regarding a job opening with Sun Technology Integrators, Bangalore. Please find below the job description for your reference. Kindly let me know your interest and share your updated CV to nandinis@suntechnologies.com ASAP.

Kindly share the below details:
  • C.CTC
  • E.CTC
  • Notice Period
  • Current location
  • Are you serving notice period / immediate?

Please let me know if any of your friends are looking for a job change. Kindly share the references.

Please Note: WFO - Work From Office (No hybrid or Work From Home)
Shift Details: IST Shift (02:00 PM to 11:00 PM) - one-way free cab facility (drop) + food

Roles and Responsibilities:
  • Defining and taking ownership of all aspects of technology solutions to align with business program priorities.
  • Owning the quality of technology deliverables for given products/programs.
  • Mentoring and growing technology team members.
  • Directly supporting agile development teams in defining and ensuring compliance with architecture/design standards and guidelines.
  • Working to continuously improve technology craft patterns, practices, tools and methods.
  • Continuing to "sharpen the blade" of technology patterns and techniques based on an understanding of changing trends.

Key Attributes:
  • Strong software architecture skills and knowledge, with a proven track record of delivering complex software systems.
  • Focus on contemporary architecture patterns and practices (e.g. experience with microservices, REST, responsive design, SQL and NoSQL, front-end technologies, DevOps).
  • Understanding of the full end-to-end technology stack (i.e. front-end client to database, and application to infrastructure).
  • Ability to communicate effectively and to maintain meaningful relationships with business and technology stakeholders.
  • Focus on being sufficiently hands-on, pragmatic and willing to step in to code reviews and implementation design discussions.
  • Ability to think strategically and deliver tactically - be a big-picture thinker with the ability to jump in and help deliver on the vision.
  • Ability to inspire, lead and mentor.
  • Solid understanding of agile methods and tools, and experience working in an agile environment.

Good to have: Java (v8+), Spring Framework (v4+), AWS, Azure, Git & CI/CD process, Networking Protocols.
Desired: JavaScript, Angular (v8+), Node.js, Python, Shell scripting, Unix, Linux, Windows, ELK Stack, Kafka, JMS, message queuing, Packer, Terraform, BDD and TDD. Experience migrating COBOL to Java.

Thanks and Regards,
Nandini S | Sr. Technical Recruiter
Sun Technology Integrators Pvt. Ltd.
nandinis@suntechnologies.com
www.suntechnologies.com

Posted 1 day ago

Apply

Exploring Docker Jobs in India

Docker technology has gained immense popularity in the IT industry, and job opportunities for professionals skilled in Docker are on the rise in India. Companies are increasingly adopting containerization to streamline their development and deployment processes, creating a high demand for Docker experts in the job market.
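
The workflow driving this demand is packaging an application and its dependencies into a portable image. As a rough illustration (the base image, file names, and port here are hypothetical, not taken from any posting above), a minimal Dockerfile for a Python web service might look like:

```dockerfile
# Minimal example Dockerfile for a hypothetical Python web service.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached across code-only changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY . .

EXPOSE 8000
CMD ["python", "app.py"]
```

Built with `docker build -t myapp .` and run with `docker run -p 8000:8000 myapp`, the same image behaves identically on a developer laptop and a production server, which is the streamlining companies are adopting containerization for.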

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Chennai

These cities are known for their vibrant tech scene and host a large number of companies actively seeking Docker professionals.

Average Salary Range

The salary range for Docker professionals in India varies based on experience levels. Entry-level positions may start at around ₹4-6 lakhs per annum, while experienced Docker engineers can earn upwards of ₹15-20 lakhs per annum.

Career Path

In the Docker job market, a typical career path may involve starting as a Junior Developer, progressing to a Senior Developer, and eventually moving into roles like Tech Lead or DevOps Engineer as one gains more experience and expertise in Docker technology.

Related Skills

In addition to Docker expertise, professionals in this field are often expected to have knowledge of related technologies such as Kubernetes, CI/CD tools, Linux administration, scripting languages like Bash or Python, and cloud platforms like AWS or Azure.
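
To show how these skills fit together: in most teams, images built with Docker are run in production by Kubernetes. A minimal Deployment manifest might look like the sketch below (the image name, labels, and resource numbers are placeholders for illustration):

```yaml
# Minimal Kubernetes Deployment running a Docker image (illustrative values).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                 # horizontal scaling: three identical container replicas
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: registry.example.com/web-app:1.0.0   # pulled from a Docker registry
          ports:
            - containerPort: 8000
          resources:
            limits:
              memory: "256Mi"   # vertical sizing: per-container resource limits
              cpu: "250m"
```

Applied with `kubectl apply -f deployment.yaml`, this is the kind of artifact a CI/CD pipeline would template and roll out, tying the Docker, Kubernetes, and automation skills together.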

Interview Questions

  • What is Docker and how does it differ from virtual machines? (basic)
  • Explain the difference between an image and a container in Docker. (basic)
  • How do you manage data persistence in Docker containers? (medium)
  • What is Docker Compose and how is it used in container orchestration? (medium)
  • How can you secure your Docker containers? (medium)
  • Explain the use of Docker volumes and bind mounts. (medium)
  • What is Docker Swarm and how does it compare to Kubernetes? (advanced)
  • Describe the networking modes available for Docker containers. (advanced)
  • How would you troubleshoot a Docker container that is not starting up correctly? (medium)
  • What are the advantages of using Docker for microservices architecture? (medium)
  • How can you monitor Docker containers in production environments? (medium)
  • Explain the concept of Dockerfile and its significance in containerization. (basic)
  • What is the purpose of a Docker registry and how does it work? (medium)
  • How do you scale Docker containers horizontally and vertically? (medium)
  • What are the best practices for Docker image optimization? (advanced)
  • Describe the differences between Docker CE and Docker EE. (basic)
  • How can you automate Docker deployments using tools like Jenkins or GitLab CI/CD? (medium)
  • What security measures can you implement to protect Docker containers from vulnerabilities? (medium)
  • How would you handle resource constraints in Docker containers? (medium)
  • What is the significance of multi-stage builds in Docker? (advanced)
  • Explain the concept of container orchestration and its importance in Docker environments. (medium)
  • How do you ensure high availability for Dockerized applications? (medium)
  • What are the key differences between Docker and other containerization technologies like LXC or rkt? (advanced)
  • How would you design a CI/CD pipeline for Dockerized applications? (medium)
  • Discuss the pros and cons of using Docker for development and production environments. (medium)
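
Several of the questions above (multi-stage builds, image optimization) are easiest to answer with a concrete example. One common pattern, sketched here for a hypothetical Go service, compiles in a full build image and ships only the resulting binary in a small runtime image:

```dockerfile
# Stage 1: build inside an image that carries the full Go toolchain
FROM golang:1.22 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/server .

# Stage 2: copy only the compiled binary into a minimal runtime image;
# the toolchain layers from the builder stage are discarded entirely
FROM alpine:3.20
COPY --from=builder /out/server /usr/local/bin/server
ENTRYPOINT ["/usr/local/bin/server"]
```

The final image contains just the binary plus a small base layer, rather than the full compiler toolchain, which is the core point interviewers are probing with the multi-stage build and image optimization questions.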

Closing Remark

As you explore job opportunities in the Docker ecosystem in India, remember to showcase your skills and knowledge confidently during interviews. By preparing thoroughly and staying updated on the latest trends in Docker technology, you can position yourself as a desirable candidate for top companies in the industry. Good luck with your job search!
