
17939 Docker Jobs - Page 32

JobPe aggregates listings for easy access; you apply directly on the source job portal.

4.0 - 8.0 years

25 - 30 Lacs

Mumbai

Work from Office

Source: Naukri

Role Overview
We are seeking a highly skilled JMeter Expert with at least 4 years of experience in performance testing using JMeter. The ideal candidate will have hands-on expertise in designing, implementing, and executing performance test scripts to ensure robust system scalability and reliability. The role requires working closely with development and QA teams to analyze performance issues and optimize application performance.

Key Responsibilities:
Design, develop, and execute performance test scripts using Apache JMeter.
Analyze system behavior under various load conditions and identify bottlenecks.
Collaborate with developers and QA engineers to ensure requirements are met.
Monitor and analyze test results, providing detailed reports with recommendations.
Optimize test scripts and infrastructure to improve performance testing efficiency.
Work on distributed load testing and integrate JMeter with CI/CD pipelines.
Identify, diagnose, and troubleshoot performance issues across various applications.
Stay updated with the latest JMeter features, plugins, and best practices.

Required Qualifications:
4+ years of hands-on experience with Apache JMeter in performance testing.
Strong expertise in load, stress, scalability, and endurance testing.
Experience with distributed load testing and integrating JMeter with CI/CD pipelines.
Proficiency in scripting and parameterization in JMeter.
Ability to analyze test results, identify performance bottlenecks, and suggest improvements.
Experience with monitoring tools like New Relic, Grafana, or similar.
Strong analytical and problem-solving skills.

Preferred Skills:
Experience with cloud-based performance testing tools.
Knowledge of APM tools and log analysis.
Familiarity with containerized environments (Docker/Kubernetes).
Exposure to scripting languages such as Python, Bash, or Groovy.

Why Join Us:
Work in a remote-first environment with a highly skilled team.
Cutting-edge technology solutions in educational technology and gaming.
Opportunity to innovate and optimize performance testing strategies.
Competitive compensation and professional growth opportunities.

How to Apply:
Interested candidates can submit their resumes along with a brief description of their experience with JMeter to aneesha@thegrowthcocktail.com

Let Us Build Something Extraordinary:
This is not just a job - it is a chance to lead, create, and inspire.
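Since the role centers on executing JMeter runs and analyzing the results, here is a minimal, illustrative sketch (not part of the posting) of a post-run analysis step that computes error rate and p95 latency from a JTL results file. The `elapsed` and `success` columns are standard JMeter JTL (CSV) fields; the function name and the tiny inline sample are invented for illustration.

```python
import csv
import io
import math

def summarize_jtl(jtl_text, latency_percentile=0.95):
    """Parse JMeter JTL (CSV) output; compute error rate and pXX latency.

    Uses the standard JTL columns 'elapsed' (ms) and 'success' ('true'/'false').
    """
    rows = list(csv.DictReader(io.StringIO(jtl_text)))
    if not rows:
        return {"samples": 0, "error_rate": 0.0, "p95_ms": 0}
    elapsed = sorted(int(r["elapsed"]) for r in rows)
    errors = sum(1 for r in rows if r["success"].lower() != "true")
    # Nearest-rank percentile: the ceil(p * n)-th smallest sample.
    idx = min(len(elapsed) - 1, math.ceil(latency_percentile * len(elapsed)) - 1)
    return {
        "samples": len(rows),
        "error_rate": errors / len(rows),
        "p95_ms": elapsed[idx],
    }

# Example: a tiny excerpt of what `jmeter -n -t plan.jmx -l results.jtl` produces.
sample = """timeStamp,elapsed,label,responseCode,success
1,120,Home,200,true
2,300,Home,200,true
3,90,Home,500,false
4,210,Home,200,true
"""
summary = summarize_jtl(sample)
```

A script like this can gate a CI/CD stage by failing the build when the error rate or p95 exceeds an agreed threshold.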

Posted 1 day ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Chennai, Bengaluru

Work from Office

Source: Naukri

Key Responsibilities
Build scalable ETL pipelines and implement robust data solutions in Azure.
Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults.
Design and maintain secure and efficient data lake architecture.
Work with stakeholders to gather data requirements and translate them into technical specs.
Implement CI/CD pipelines for seamless data deployment using Azure DevOps.
Monitor data quality, performance bottlenecks, and scalability issues.
Write clean, organized, reusable PySpark code in an Agile environment.
Document pipelines, architectures, and best practices for reuse.

Must-Have Skills
Experience: 6+ years in Data Engineering
Tech Stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults
Core Expertise: Data Warehousing, ETL, Data Pipelines, Data Modelling, Data Governance
Agile, SDLC, Containerization (Docker), Clean coding practices

Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
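One of the responsibilities above is monitoring data quality in pipelines. As a tool-agnostic sketch of the idea (in practice this would run as a PySpark/Databricks step; the rule names and record fields here are invented for illustration), a quality gate can report which rows violate which rule so the pipeline can fail fast or quarantine them:

```python
def run_quality_checks(rows, required, unique_key):
    """Apply simple data-quality rules to a batch of records.

    Returns a dict mapping rule name -> list of offending row indices.
    """
    failures = {"missing_required": [], "duplicate_key": []}
    seen = set()
    for i, row in enumerate(rows):
        # Rule 1: every required column must be present and non-empty.
        if any(row.get(col) in (None, "") for col in required):
            failures["missing_required"].append(i)
        # Rule 2: the business key must be unique within the batch.
        key = row.get(unique_key)
        if key in seen:
            failures["duplicate_key"].append(i)
        seen.add(key)
    return failures

batch = [
    {"order_id": "A1", "amount": 100},
    {"order_id": "A2", "amount": None},   # missing required value
    {"order_id": "A1", "amount": 50},     # duplicate business key
]
report = run_quality_checks(batch, required=["order_id", "amount"],
                            unique_key="order_id")
```

The same rules translate directly to Spark DataFrame filters when the batch is too large for memory.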

Posted 1 day ago

Apply

6.0 years

60 - 65 Lacs

Kochi, Kerala, India

Remote

Source: LinkedIn

Experience: 6.00+ years
Salary: INR 6000000-6500000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (Payroll and Compliance to be managed by: Crop.Photo)
(*Note: This is a requirement for one of Uplers' clients - Crop.Photo)

What do you need for this opportunity?
Must-have skills required: MAM, App integration

Crop.Photo is looking for: Technical Lead for Evolphin AI-Driven MAM
At Evolphin, we build powerful media asset management solutions used by some of the world’s largest broadcasters, creative agencies, and global brands. Our flagship platform, Zoom, helps teams manage high-volume media workflows—from ingest to archive—with precision, performance, and AI-powered search. We’re now entering a major modernization phase, and we’re looking for an exceptional Technical Lead to own and drive the next-generation database layer powering Evolphin Zoom. This is a rare opportunity to take a critical backend system that serves high-throughput media operations and evolve it to meet the scale, speed, and intelligence today’s content teams demand.

What you’ll own
Leading the re-architecture of Zoom’s database foundation with a focus on scalability, query performance, and vector-based search support
Replacing or refactoring our current in-house object store and metadata database with a modern, high-performance elastic solution
Collaborating closely with our core platform engineers and AI/search teams to ensure seamless integration and zero disruption to existing media workflows
Designing an extensible system that supports object-style relationships across millions of assets, including LLM-generated digital asset summaries, time-coded video metadata, AI-generated tags, and semantic vectors
Driving end-to-end implementation: schema design, migration tooling, performance benchmarking, and production rollout—all with aggressive timelines

Skills & Experience We Expect
We’re looking for candidates with 7–10 years of hands-on engineering experience, including 3+ years in a technical leadership role. Your experience should span the following core areas:

System Design & Architecture (3–4 yrs)
Strong hands-on experience with the Java/JVM stack (GC tuning) and Python in production environments
Led system-level design for scalable, modular AWS microservices architectures
Designed high-throughput, low-latency media pipelines capable of scaling to billions of media records
Familiar with multitenant SaaS patterns, service decomposition, and elastic scale-out/in models
Deep understanding of infrastructure observability, failure handling, and graceful degradation

Database & Metadata Layer Design (3–5 yrs)
Experience redesigning or implementing object-style metadata stores used in MAM/DAM systems
Strong grasp of schema-less models for asset relationships, time-coded metadata, and versioned updates
Practical experience with DynamoDB, Aurora, PostgreSQL, or similar high-scale databases
Comfortable evaluating trade-offs between memory, query latency, and write throughput

Semantic Search & Vectors (1–3 yrs)
Implemented vector search using systems like Weaviate, Pinecone, Qdrant, or Faiss
Able to design hybrid (structured + semantic) search pipelines for similarity and natural-language use cases
Experience tuning vector indexers for performance, memory footprint, and recall
Familiar with the basics of embedding generation pipelines and how they are used for semantic search and similarity-based retrieval
Worked with MLOps teams to deploy ML inference services (e.g., FastAPI/Docker + GPU-based EC2 or SageMaker endpoints)
Understands the limitations of recognition models (e.g., OCR, face/object detection, logo recognition), even if not directly building them

Media Asset Workflow (2–4 yrs)
Deep familiarity with broadcast and OTT formats: MXF, IMF, DNxHD, ProRes, H.264, HEVC
Understanding of proxy workflows in video post-production
Experience with the digital asset lifecycle: ingest, AI metadata enrichment, media transformation, S3 cloud archiving
Hands-on experience with time-coded metadata (e.g., subtitles, AI tags, shot changes) management in media archives

Cloud-Native Architecture (AWS) (3–5 yrs)
Strong hands-on experience with ECS, Fargate, Lambda, S3, DynamoDB, Aurora, SQS, EventBridge
Experience building serverless or service-based compute models for elastic scaling
Familiarity with managing multi-region deployments, failover, and IAM configuration
Built cloud-native CI/CD deployment pipelines with event-driven microservices and queue-based workflows

Frontend Collaboration & React App Integration (2–3 yrs)
Worked closely with React-based frontend teams, especially on desktop-style web applications
Familiar with component-based design systems, REST/GraphQL API integration, and optimizing media-heavy UI workflows
Able to guide frontend teams on data modeling, caching, and efficient rendering of large asset libraries
Experience with Electron for desktop apps

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
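The hybrid (structured + semantic) search pipelines described above typically filter on structured metadata first, then rank the survivors by vector similarity. Here is a toy brute-force sketch with made-up asset records; a production system would use Weaviate, Pinecone, Qdrant, or Faiss for the vector ranking rather than this linear scan:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(assets, query_vec, filters, top_k=2):
    """Structured filter first, then rank the candidates by vector similarity."""
    candidates = [
        a for a in assets
        if all(a["meta"].get(k) == v for k, v in filters.items())
    ]
    candidates.sort(key=lambda a: cosine(a["vec"], query_vec), reverse=True)
    return [a["id"] for a in candidates[:top_k]]

# Hypothetical media assets with 2-d embeddings (real ones have hundreds of dims).
assets = [
    {"id": "clip-1", "vec": [1.0, 0.0], "meta": {"format": "ProRes"}},
    {"id": "clip-2", "vec": [0.9, 0.1], "meta": {"format": "ProRes"}},
    {"id": "clip-3", "vec": [0.0, 1.0], "meta": {"format": "H.264"}},
]
results = hybrid_search(assets, query_vec=[1.0, 0.0], filters={"format": "ProRes"})
```

The design choice worth noting is the ordering: filtering before ranking keeps the expensive similarity computation off assets that could never match the structured constraints.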

Posted 1 day ago

Apply

6.0 years

60 - 65 Lacs

Visakhapatnam, Andhra Pradesh, India

Remote

Source: LinkedIn

(This posting repeats the Crop.Photo / Evolphin Technical Lead description from the Kochi listing above verbatim; only the location differs.)

Posted 1 day ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Description
We are seeking a passionate and experienced Senior Manager of Business Intelligence & Data Engineering to lead and develop a high-performing team of engineers. The scope of this role will be broad and multi-tiered, covering all aspects of the Business Intelligence (BI) ecosystem - designing, building, and maintaining robust data pipelines, enabling advanced analytics, and delivering actionable insights through BI and data visualization tools. You will play a critical role in fostering a collaborative and innovative team environment, while driving continuous improvement across all aspects of the engineering process. Key to the success of this role will be an assertiveness and willingness to engage directly with stakeholders, developing relationships while acquiring a deep understanding of functional domains (business processes, etc.).

Key Responsibilities
Lead the design and development of scalable, high-performance data architectures on AWS, leveraging services such as S3, EMR, Glue, Redshift, Lambda, and Kinesis.
Architect and manage Data Lakes for handling structured, semi-structured, and unstructured data.
Manage Snowflake for cloud data warehousing, ensuring seamless data integration, optimization of queries, and advanced analytics.
Implement Apache Iceberg in Data Lakes for managing large-scale datasets with ACID compliance, schema evolution, and versioning.
Drive data modeling and productization: design and implement data models (e.g., star/snowflake schemas) to support analytical use cases, productizing datasets for business consumption and downstream analytics.
Work with business stakeholders to create actionable insights using enterprise BI platforms (MicroStrategy, Tableau, Power BI, etc.).
Build data models and dashboards that drive key business decisions, ensuring that data is easily accessible and interpretable.
Ensure that data pipelines, architectures, and systems are thoroughly documented and follow coding and design best practices.
Promote knowledge-sharing across the team to maintain high standards for quality and scalability.
Call upon a breadth of experience spanning many technologies and platforms to help shape architectural direction.
Assist end-users in optimizing their analytic usage, visualizing data in a more efficient and actionable fashion, beyond data dumps and grid reports.
Promote ongoing adoption of business intelligence content through an emphasis on user experience, iterative design refinement, and regular training.
Implement observability and error handling: build frameworks for operational monitoring, error handling, and data quality assurance to ensure high reliability and accountability across the data ecosystem.
Stay ahead of industry trends: keep abreast of the latest techniques, methods, and technologies in data engineering and BI, ensuring the team adopts cutting-edge tools and practices to maintain a competitive edge.

Qualifications
10+ years of experience in Data Engineering or a related field, with a proven track record of designing, implementing, and maintaining large-scale distributed data systems
5+ years of work experience in BI/data visualization/analytics
5+ years of people management experience, with experience managing global teams
Track record of solving business challenges through technical solutions; able to articulate the context behind projects and their impact
Knowledge of CI/CD tools and practices, particularly in data engineering environments
Proficiency in cloud-based data warehousing, data modeling, and query optimization
Experience with AWS services (e.g., Lambda, Redshift, Athena, Glue, S3) and managing cloud infrastructure
Strong experience in Data Lake architectures on AWS, using services like S3, Glue, EMR, and data management platforms like Apache Iceberg
Familiarity with containerization tools like Docker and Kubernetes for managing cloud-based services
Hands-on experience with Apache Spark (Scala & PySpark) for distributed data processing and real-time analytics
Expertise in SQL for querying relational and NoSQL databases, and experience with database design and optimization
Proficiency in creating interactive dashboards and reports using drag-and-drop interfaces in enterprise BI platforms, with a focus on user-friendly design for both technical and non-technical stakeholders
Experience in microservices-based architectures, messaging, APIs, and distributed systems
Familiarity with embedding BI content into applications or websites using APIs (e.g., Power BI Embedded, MicroStrategy’s HyperIntelligence for zero-code embedding, Tableau’s robust APIs)
Able to work in a collaborative environment to support rapid development and delivery of results
Exhibit an understanding of business problems and translate them into creative, innovative, and practical solutions that deliver high-quality services to the business
Strong communication and presentation skills, with experience delivering insights to both technical and executive audiences
Willing to wear many hats and be flexible with a varying nature of tasks and responsibilities

Bonus Points
Understanding of data science and machine learning concepts, with the ability to collaborate with data science teams
Knowledge of Infrastructure as Code (IaC) practices, using tools like Terraform to provision and manage cloud infrastructure (e.g., AWS) for data pipelines and BI systems
Familiarity with data governance, security, and compliance practices in cloud environments
Domain understanding of Apparel, Retail, Manufacturing, Supply Chain, or Logistics

About Us
Fanatics is building a leading global digital sports platform. We ignite the passions of global sports fans and maximize the presence and reach for our hundreds of sports partners globally by offering products and services across Fanatics Commerce, Fanatics Collectibles, and Fanatics Betting & Gaming, allowing sports fans to Buy, Collect, and Bet. Through the Fanatics platform, sports fans can buy licensed fan gear, jerseys, lifestyle and streetwear products, headwear, and hardgoods; collect physical and digital trading cards, sports memorabilia, and other digital assets; and bet as the company builds its Sportsbook and iGaming platform.
Fanatics has an established database of over 100 million global sports fans; a global partner network with approximately 900 sports properties, including major national and international professional sports leagues, players associations, teams, colleges, college conferences and retail partners, 2,500 athletes and celebrities, and 200 exclusive athletes; and over 2,000 retail locations, including its Lids retail stores. Our more than 22,000 employees are committed to relentlessly enhancing the fan experience and delighting sports fans globally.

About The Team
Fanatics Commerce is a leading designer, manufacturer, and seller of licensed fan gear, jerseys, lifestyle and streetwear products, headwear, and hardgoods. It operates a vertically-integrated platform of digital and physical capabilities for leading sports leagues, teams, colleges, and associations globally – as well as its flagship site, www.fanatics.com. Fanatics Commerce has a broad range of online, sports venue, and vertical apparel partnerships worldwide, including comprehensive partnerships with leading leagues, teams, colleges, and sports organizations across the world—including the NFL, NBA, MLB, NHL, MLS, Formula 1, and Australian Football League (AFL); the Dallas Cowboys, Golden State Warriors, Paris Saint-Germain, Manchester United, Chelsea FC, and Tokyo Giants; the University of Notre Dame, University of Alabama, and University of Texas; the International Olympic Committee (IOC), England Rugby, and the Union of European Football Associations (UEFA).

At Fanatics Commerce, we infuse our BOLD Leadership Principles in everything we do:
Build Championship Teams
Obsessed with Fans
Limitless Entrepreneurial Spirit
Determined and Relentless Mindset
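The role above calls for star/snowflake schema data models productized for BI consumption. As a minimal illustration of a star schema (one fact table joined to a dimension for a BI-style rollup), here is an in-memory SQLite sketch; the table and column names are invented for the example, not Fanatics' actual model:

```python
import sqlite3

# A minimal star schema: one fact table keyed to one dimension table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_sales (
    sale_id INTEGER PRIMARY KEY,
    product_id INTEGER REFERENCES dim_product(product_id),
    revenue REAL
);
INSERT INTO dim_product VALUES (1, 'Jerseys'), (2, 'Headwear');
INSERT INTO fact_sales VALUES (10, 1, 120.0), (11, 1, 80.0), (12, 2, 30.0);
""")

# A typical BI rollup: aggregate the fact table grouped by a dimension attribute.
rows = conn.execute("""
    SELECT d.category, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_product d USING (product_id)
    GROUP BY d.category
    ORDER BY d.category
""").fetchall()
```

The same shape scales up directly: dashboards in MicroStrategy, Tableau, or Power BI are essentially parameterized versions of this group-by over fact and dimension tables.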

Posted 1 day ago

Apply

6.0 years

0 Lacs

Kochi, Kerala, India

On-site

Source: LinkedIn

DevOps Engineer
Location: Kochi, Kerala
Position Type: Full-time permanent role
Salary: 10-12 LPA + Benefits
Experience Required: 6+ Years

Required Skills & Qualifications
● Experience: 5+ years in DevOps/Systems Engineering, with 2+ years in Kubernetes on-premises and 3+ years in SQL database management.
● Technical Expertise:
○ Proficiency in Kubernetes cluster management (etcd, control plane, worker nodes).
○ Strong Linux/Windows administration and scripting (Bash, Python).
○ Experience with containerization (Docker) and orchestration tools.
○ Knowledge of on-premises storage (SAN/NAS) and networking (VLANs, firewalls).
○ Advanced SQL skills, including query optimization, schema design, and stored procedures.
○ Experience with database migration tools and techniques.
● Tools: Ansible, Terraform, Jenkins, Prometheus, Grafana, Docker.
● Soft Skills: Leadership, problem-solving, and stakeholder communication.

Preferred Skills & Qualifications
● Certifications: CKAD, CKS, or CNCF certifications.
● Experience with service meshes (Istio) or chaos engineering tools.
● Familiarity with hybrid cloud architectures (AWS/Azure integration).
● Knowledge of database replication and sharding techniques.
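The posting asks for experience with database migration tools and techniques alongside SQL administration. One common pattern is the idempotent migration step, which can be rerun safely. Here is a sketch using SQLite's `PRAGMA table_info` (real deployments would usually use a dedicated migration tool such as Flyway or Liquibase; the table and column names are invented):

```python
import sqlite3

def ensure_column(conn, table, column, ddl_type):
    """Idempotent migration step: add a column only if it is not present yet.

    Returns True if the schema was changed, False if it was already current.
    """
    cols = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    if column not in cols:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {ddl_type}")
        return True
    return False

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
first = ensure_column(conn, "users", "email", "TEXT")   # applies the change
second = ensure_column(conn, "users", "email", "TEXT")  # safe no-op on rerun
```

Idempotence is the key property here: the same migration script can run in every environment (dev, staging, production) without tracking which ones already applied it.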

Posted 1 day ago

Apply

1.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Source: LinkedIn

Equifax is where you can power your possible. If you want to achieve your true potential, chart new paths, develop new skills, collaborate with bright minds, and make a meaningful impact, we want to hear from you.

What you’ll do
This position is at the forefront of Equifax's post-cloud transformation, focusing on developing and enhancing Java applications within the Google Cloud Platform (GCP) environment. The ideal candidate will combine strong Java development skills with cloud expertise to drive innovation and improve existing systems.

Key Responsibilities
Design, develop, test, deploy, maintain, and improve software applications on GCP
Enhance existing applications and contribute to new initiatives leveraging cloud-native technologies
Implement best practices in serverless computing, microservices, and cloud architecture
Collaborate with cross-functional teams to translate functional and technical requirements into detailed architecture and design
Participate in code reviews and maintain high development and security standards
Provide technical oversight and direction for Java and GCP implementations

What Experience You Need
Bachelor's or Master's degree in Computer Science or equivalent experience
1+ years of IT experience with a strong focus on Java development
Experience in modern Java development and cloud computing concepts
Familiarity with agile methodologies and test-driven development (TDD)
Strong understanding of software development best practices, including continuous integration and automated testing

What could set you apart
Experience with GCP or other cloud platforms (AWS, Azure)
Active cloud certifications (e.g., Google Cloud Professional certifications)
Experience with big data technologies (Spark, Kafka, Hadoop) and NoSQL databases
Knowledge of containerization and orchestration tools (Docker, Kubernetes)
Familiarity with the financial services industry
Experience with open-source frameworks (Spring, Ruby, Apache Struts, etc.)
Experience with Python

We offer a hybrid work setting, comprehensive compensation and healthcare packages, attractive paid time off, and organizational growth potential through our online learning platform with guided career tracks. Are you ready to power your possible? Apply today, and get started on a path toward an exciting new career at Equifax, where you can make a difference!

Who is Equifax?
At Equifax, we believe knowledge drives progress. As a global data, analytics and technology company, we play an essential role in the global economy by helping employers, employees, financial institutions and government agencies make critical decisions with greater confidence. We work to help create seamless and positive experiences during life’s pivotal moments: applying for jobs or a mortgage, financing an education or buying a car. Our impact is real and to accomplish our goals we focus on nurturing our people for career advancement and their learning and development, supporting our next generation of leaders, maintaining an inclusive and diverse work environment, and regularly engaging and recognizing our employees. Regardless of location or role, the individual and collective work of our employees makes a difference and we are looking for talented team players to join us as we help people live their financial best. Equifax is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
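The posting above emphasizes test-driven development (TDD) and automated testing. In TDD the test pins down the contract before or alongside the implementation. A small self-contained illustration of that rhythm (the masking function is a generic example of the data-handling code a credit bureau might write, not an Equifax API):

```python
def mask_account_number(value, visible=4):
    """Mask all but the last `visible` digits of an account-like string."""
    digits = "".join(ch for ch in value if ch.isdigit())
    if len(digits) <= visible:
        return digits
    return "*" * (len(digits) - visible) + digits[-visible:]

def test_mask_account_number():
    # These assertions were the spec: written first, they drove the
    # implementation above (strip separators, mask, keep the tail).
    assert mask_account_number("1234-5678-9012") == "********9012"
    assert mask_account_number("9012") == "9012"

test_mask_account_number()
```

In a real project the test function would live in a separate module and run under a framework like JUnit (Java) or pytest (Python) as part of the continuous-integration pipeline the posting mentions.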

Posted 1 day ago

Apply

0.0 - 3.0 years

0 Lacs

Mohali, Punjab

On-site

Source: Indeed

The Role- As a Product Technical Lead , you will act as the bridge between the product vision and technical execution. You will lead product architecture discussions, define technical roadmaps, and guide engineering teams to deliver high-performance, scalable solutions for our AI chatbot platform – BotPenguin. This is a high-impact role that demands strategic thinking, hands-on development expertise, and leadership skills to align cross-functional teams toward product success. You will be closely working with product managers, senior engineers, AI experts, and business stakeholders. You will also be responsible for conducting code reviews, mentoring junior developers, and ensuring high software quality standards. This role offers exciting opportunities to build impactful AI-driven solutions and shape the future of conversational automation. What you need for this role- Education: Bachelor's degree in Computer Science, IT, or related field. Experience: 5 + years of experience in software engineering with at least 2+ years in a technical leadership role. Technical Skills: Proven experience in scalable system design and product architecture . Strong understanding of MEAN/MERN Stack technologies. Experience in software architecture planning and low-level design. Ability to define and implement product-level architectural patterns. Ability to create and implement scalable, high-performance solutions. Hands-on experience in backend API development & UI integration. Familiarity with cloud platforms like AWS and containerisation (Docker, Kubernetes). Understanding of AI/ML concepts in development. Knowledge of version control tools like GitLab/GitHub and project management tools like Notion . Soft Skills : Strong analytical mindset, leadership skills, and a passion for mentoring junior developers. What you will be doing- Lead technical architecture design and roadmap planning for BotPenguin’s core platform. 
Work alongside the Product Manager to align product vision with technical execution. Collaborate with engineering teams to translate product requirements into scalable solutions . Design and develop core modules of the platform, especially those related to automation, chat assignment, analytics, and multi-agent support . Implement and enforce technical best practices , coding guidelines, and documentation standards. Evaluate and integrate LLM models, AI agents , and automation tools as per evolving product needs. Ensure performance, security, and scalability of applications across global deployments. Support Customer Success and QA teams with technical issue resolution and RCA . Drive technical discussions, conduct code reviews, and ensure timely feature delivery. Foster a culture of continuous improvement, collaboration, and innovation within the tech team. Collaborate with the Product Team to plan and implement technical solutions for new features. Work closely with Technical Leads & Senior Developers to define software architecture and create low-level designs. Conduct code reviews to ensure adherence to best practices and coding standards. Develop backend APIs and integrate them with frontend applications. Conduct automated unit & integration testing to ensure high code quality. Document technical processes, APIs, and troubleshooting guides. Monitor system performance and suggest improvements to optimize efficiency. Assist the Customer Success Team in resolving technical challenges and enhancing user experience. Mentor junior engineers, providing guidance on best practices and career growth. Any other task relevant to the product that may be needed. Top reasons to work with us- Lead the architecture and evolution of a fast-growing AI product used globally. Be part of a cutting-edge AI startup driving innovation in chatbot automation. Work with a passionate and talented team that values knowledge-sharing and problem-solving. 
Growth-oriented environment with ample learning opportunities.
Exposure to top-tier global clients and projects with real-world impact.
Flexible work hours and an emphasis on work-life balance.
A culture that fosters creativity, ownership, and collaboration.
Job Type: Full-time
Pay: ₹1,800,000.00 - ₹2,000,000.00 per year
Benefits: Flexible schedule, Health insurance, Leave encashment, Provident Fund
Schedule: Day shift
Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Required)
Experience: Technical leadership: 2 years (Required); AWS: 2 years (Required); MERN/MEAN: 3 years (Required)
Work Location: In person
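The responsibilities above include designing chat-assignment and multi-agent support modules. As a hedged illustration only (this is not BotPenguin's actual implementation; all names here are hypothetical), a minimal round-robin assignment of incoming chats to online agents could be sketched as:

```python
from collections import deque

class ChatAssigner:
    """Round-robin assignment of incoming chats to online agents.

    Hypothetical sketch: agent IDs are plain strings, and an agent
    rejoins the back of the queue after each assignment.
    """

    def __init__(self, agents):
        self._queue = deque(agents)

    def assign(self, chat_id):
        if not self._queue:
            raise RuntimeError("no agents online")
        agent = self._queue.popleft()   # next agent in the rotation
        self._queue.append(agent)       # rotate to the back of the queue
        return (chat_id, agent)

assigner = ChatAssigner(["alice", "bob"])
print(assigner.assign("chat-1"))  # ('chat-1', 'alice')
print(assigner.assign("chat-2"))  # ('chat-2', 'bob')
print(assigner.assign("chat-3"))  # ('chat-3', 'alice')
```

A production version would also handle agents going offline mid-rotation and per-agent load caps, which this sketch deliberately omits.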

Posted 1 day ago


3.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Company Description
CodeChavo is a global digital transformation solutions provider, partnering closely with top technology companies to drive impactful change. Powered by technology and led by purpose, CodeChavo supports clients from design to operation. With deep domain expertise, CodeChavo integrates innovation and agility into client organizations. We specialize in helping companies outsource their digital projects and build quality tech teams.
Role Description
This is a full-time on-site role for a QA Engineer SDET 2 (Playwright) based in Gurugram. The QA Engineer will be responsible for executing and managing software tests, ensuring quality assurance, performing manual testing, and developing and maintaining test cases. The role involves working closely with the development team to ensure the highest standards of software quality are met.
Qualification:
Quality assurance and software testing skills
Experience in creating and maintaining test cases
Manual testing skills
Proficiency in QA automation
Skills & Qualifications:
3+ years of experience in automation testing with exposure to both UI and API testing.
Strong programming skills in Java and JavaScript.
Hands-on experience with tools like Selenium, Playwright, and Postman.
Good understanding of CI/CD tools such as Jenkins, Git-based workflows, and automated deployment pipelines.
Familiarity with bug tracking and test management tools (e.g., JIRA, TestRail).
Preferred Qualifications:
Exposure to performance or load testing tools (e.g., JMeter).
Experience with BDD tools like Cucumber.
Working knowledge of containerized environments (e.g., Docker).
Understanding of Agile methodologies and DevOps practices.
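UI automation roles like this one commonly structure test code with the Page Object pattern. A minimal sketch in Python, with a stubbed driver standing in for Playwright or Selenium (the class and method names here are illustrative, not any specific tool's API):

```python
class FakeDriver:
    """Stand-in for a real browser driver (Playwright/Selenium)."""
    def __init__(self):
        self.filled = {}
        self.clicked = []

    def fill(self, selector, value):
        self.filled[selector] = value

    def click(self, selector):
        self.clicked.append(selector)

class LoginPage:
    """Page Object: encapsulates one page's selectors and actions,
    so tests read as intent ("log in") rather than raw selectors."""
    USER = "#username"
    PASS = "#password"
    SUBMIT = "button[type=submit]"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.fill(self.USER, user)
        self.driver.fill(self.PASS, password)
        self.driver.click(self.SUBMIT)

driver = FakeDriver()
LoginPage(driver).login("qa-user", "secret")
print(driver.clicked)  # ['button[type=submit]']
```

The value of the pattern is that when a selector changes, only the page object is edited, not every test that touches the page.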


4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Role: Java Developer - Software Engineer
Experience: 4-9 years
Location: Chennai (hybrid)
Interview: F2F
Mandatory: Java, Spring Boot, Microservices, React JS, AWS Cloud, DevOps, Node (added advantage)
Job Description:
Overall 4+ years of experience in Java development projects
3+ years of development experience with React
2+ years of experience in AWS Cloud and DevOps
Microservices development using Spring Boot
Technical Stack: Core Java, Java, J2EE, Spring, MongoDB, GKE, Terraform, GitHub, GCP Developer, Kubernetes, Scala, Kafka
Technical Tools: Confluence/Jira, Bitbucket or Git, CI/CD (Maven, Git, Jenkins), Eclipse or IntelliJ IDEA
Experience in event-driven architectures (CQRS and SAGA patterns)
Experience in design patterns
Build tools (Gulp, Webpack), Jenkins, Docker, automation, Bash, Redis, Elasticsearch, Kibana
Technical Stack (UI): JavaScript, React JS, CSS/SCSS, HTML5, Git


40.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


About Iamneo
Founded in 2016 and now part of the NIIT family, iamneo is a fast-growing, profitable B2B EdTech SaaS company that's transforming how tech talent is upskilled, evaluated, and deployed. Our AI-powered learning and assessment platforms help enterprises and educational institutions build future-ready talent at scale. We specialize in Talent Upskilling, Assessment, and Workforce Transformation across sectors like ITeS, BFSI, and Higher Education. Our solutions are trusted by top corporates such as Wipro, HCLTech, LTIMindtree, Virtusa, Tech Mahindra, and Hexaware, and by over 150 leading institutions including BITS Pilani, VIT, SRM, LPU, and Manipal. As an NIIT Venture, we're backed by NIIT's 40+ years of legacy in learning and talent development, combining their global reputation and deep domain expertise with our AI-first, product-driven approach to modern upskilling. If you are passionate about innovation, growth, and redefining the future of tech learning, iamneo is the place for you.
About The Role
We're looking for a Senior DevOps & Cloud Operations Engineer who can take end-to-end ownership of our cloud infrastructure and DevOps practices, with proven expertise in both Google Cloud Platform (GCP) and Microsoft Azure. This role is critical to driving scalable, secure, and high-performance deployment environments for our applications. If you thrive in a multi-cloud, automation-first environment and enjoy building robust systems that scale, we'd love to hear from you.
🔧 What You'll Do
Architect, deploy, and manage scalable, secure, and highly available cloud infrastructure
Lead infrastructure optimization initiatives including performance tuning, cost control, and capacity planning
Design and implement CI/CD pipelines using tools like Jenkins, GitHub Actions, Cloud Build, or similar
Automate infrastructure provisioning and configuration using Terraform, Ansible, or similar tools
Manage containerized environments using Docker and Kubernetes, with best practices for orchestration and lifecycle management
Work with microservice-based architectures and support seamless deployment workflows
Implement configuration management using tools such as Terraform, Ansible, or others
Set up and maintain monitoring, alerting, and logging systems (e.g., Prometheus, Grafana, Azure Monitor, Sentry, New Relic)
Write automation and operational scripts in Bash, Python, or equivalent scripting languages
Ensure security controls, compliance, and DevSecOps practices are implemented across environments
Conduct regular infrastructure audits, backups, and disaster recovery drills
Troubleshoot and resolve infrastructure-related issues proactively
Collaborate with product and development teams to align infrastructure with application and business needs
Support platform transitions, version upgrades, and cloud migration efforts
Mentor junior engineers and promote DevOps best practices across teams
✅ What We're Looking For
5+ years of hands-on experience in DevOps, cloud infrastructure, and system reliability
Strong experience across cloud platforms, with a preference for exposure to both GCP and Azure
Proven expertise in CI/CD, infrastructure-as-code, and container orchestration
Proficiency in scripting using Bash, Python, or similar languages
Solid understanding of cloud-native and microservices architectures
Strong problem-solving, documentation, and communication skills
High ownership mindset and ability to work in fast-paced environments
🌟 Bonus Points For
GCP and/or Azure certifications
Experience with Agile and DevOps cultural practices
Prior experience deploying Node.js, Python, or similar web applications
Skills: Azure Monitor, Bash, Python, GCP, Jenkins, Ansible, Sentry, Kubernetes, New Relic, Grafana, infrastructure, CI/CD, Microsoft Azure, DevOps, Docker, Cloud Build, cloud, Azure, Prometheus, Terraform, Google Cloud Platform (GCP), GitHub Actions
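The role above calls for operational scripts in Bash or Python around monitoring systems such as Prometheus. As a hedged sketch (the metric name and alert threshold are invented for illustration), a small Python script might parse Prometheus text-format metrics to decide whether to alert:

```python
def parse_metrics(text):
    """Parse Prometheus text exposition format into {metric: value}.

    Deliberately simplified sketch: ignores labels and treats
    comment lines (#, including HELP/TYPE) as noise.
    """
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

sample = """
# HELP node_load1 1m load average.
node_load1 3.5
node_memory_free_bytes 1073741824
"""
m = parse_metrics(sample)
print(m["node_load1"])        # 3.5
print(m["node_load1"] > 2.0)  # True -> would fire a hypothetical load alert
```

A real script would scrape the `/metrics` endpoint over HTTP and handle labeled series; this sketch only shows the parsing and thresholding idea.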


4.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Hello! You've landed on this page, which means you're interested in working with us. Let's take a sneak peek at what it's like to work at Innovaccer.
Engineering at Innovaccer
With every line of code, we accelerate our customers' success, turning complex challenges into innovative solutions. Collaboratively, we transform each data point we gather into valuable insights for our customers. Join us and be part of a team that's turning dreams of better healthcare into reality, one line of code at a time. Together, we're shaping the future and making a meaningful impact on the world.
About The Role
We at Innovaccer are looking for a Site Reliability Engineer Database-II to build the most amazing product experience. You'll get to work with other engineers to build a delightful feature experience and to understand and solve our customers' pain points.
A Day in the Life
Design, model, implement, and size large-scale database systems using Snowflake, PostgreSQL, and MongoDB
Responsible for provisioning, 24x7 availability, reliability, performance, security, maintenance, upgrades, and cost optimization
Capacity planning of large-scale database clusters
Automate DB provisioning, deployments, routine administration, maintenance, and upgrades
Address business-critical incidents (P0/P1) within the SLA, identify the RCA, and address the issue permanently
Sync data between multiple data stores (e.g., PostgreSQL to ES and Snowflake to ES)
Design, document, and benchmark Snowflake or MongoDB databases: maintenance, backup, health checks, alerting, and monitoring
Create processes and best practices, and enforce them
Identify and tune long-running queries to improve DB performance and reduce cost
What You Need
4-8 years of experience
Ability to work in a fast-paced environment with the agility to change direction as per business needs
Hands-on experience with SQL query writing, along with Python or another scripting language, in any database environment
Demonstrated experience in any cloud environment like AWS, Azure, or GCP
In-depth knowledge of any two of MongoDB, Redis, or Elasticsearch
Knowledge of PostgreSQL / Snowflake / MySQL is a plus
Setup of high availability, replication, and incremental backups for various datastores
Setup of database security best practices like encryption, auditing, and role-based access control
Knowledge of DB design principles, partitioning/sharding, and query optimization
Expert in troubleshooting database performance issues in production
Demonstrated experience with both cloud-managed and self-hosted databases, managing medium- to large-sized production deployments
Experience in building proofs of concept, trying out new solutions, and improving existing systems with best practices to solve business problems and support scaling
Knowledge of or experience with Terraform, Jenkins, and Ansible is a plus
Knowledge of a database monitoring stack such as Prometheus and Grafana
Expertise in Docker and Kubernetes is mandatory
Should be proactive and have the intellect to explore and come up with solutions to complex technical issues
Here's What We Offer
Generous Leave Benefits: Enjoy generous leave benefits of up to 40 days
Parental Leave: Experience one of the industry's best parental leave policies to spend time with your new addition
Sabbatical Leave Policy: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered
Health Insurance: We offer health benefits and insurance to you and your family for medically related expenses related to illness, disease, or injury
Pet-Friendly Office*: Spend more time with your treasured friends, even when you're away from home. Bring your furry friends with you to the office and let your colleagues become their friends, too. *Noida office only
Creche Facility for children*: Say goodbye to worries and hello to a convenient and reliable creche facility that puts your child's well-being first.
*India offices Where And How We Work Our Noida office is situated in a posh techspace, equipped with various amenities to support our work environment. Here, we follow a five-day work schedule, allowing us to efficiently carry out our tasks and collaborate effectively within our team. Innovaccer is an equal opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered. Disclaimer: Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our HR department at px@innovaccer.com. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details. About Innovaccer Innovaccer Inc. is the data platform that accelerates innovation. The Innovaccer platform unifies patient data across systems and care settings and empowers healthcare organizations with scalable, modern applications that improve clinical, financial, operational, and experiential outcomes. Innovaccer's EHR-agnostic solutions have been deployed across more than 1,600 hospitals and clinics in the US, enabling care delivery transformation for more than 96,000 clinicians, and helping providers work collaboratively with payers and life sciences companies. Innovaccer has helped its customers unify health records for more than 54 million people and generate over $1.5 billion in cumulative cost savings. 
The Innovaccer platform is the #1 rated Best-in-KLAS data and analytics platform by KLAS, and the #1 rated population health technology platform by Black Book. For more information, please visit innovaccer.com.
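The database reliability work described in this role (handling P0/P1 incidents, syncing data between stores) typically leans on retry logic for transient failures. A hedged, stdlib-only sketch, where `TransientDBError` stands in for whatever transient exception a real driver raises:

```python
import time

class TransientDBError(Exception):
    """Hypothetical stand-in for a driver's transient failure."""

def with_retries(op, attempts=4, base_delay=0.01):
    """Retry `op` with exponential backoff on transient errors."""
    for attempt in range(attempts):
        try:
            return op()
        except TransientDBError:
            if attempt == attempts - 1:
                raise                       # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s...

calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientDBError("connection reset")
    return "42 rows"

print(with_retries(flaky_query))  # '42 rows' on the third attempt
```

Production retry loops usually add jitter and an overall deadline so that many clients do not retry in lockstep; those are omitted here for brevity.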


0.0 - 1.0 years

0 Lacs

Mohali, Punjab

On-site


We are seeking a highly motivated and skilled DevOps Engineer with 1-2 years of experience to join our dynamic team. The ideal candidate will have a strong foundation in Linux, infrastructure automation, containerization, orchestration tools, and cloud platforms. This role offers an opportunity to work on cutting-edge technologies and contribute to the development and maintenance of scalable, secure, and efficient CI/CD pipelines.
Experience: 1-2 years
Location: Phase 8B Mohali (Punjab)
Key Responsibilities:
● Design, implement, and maintain scalable CI/CD pipelines to streamline software development and deployment.
● Deploy, configure, and manage containerized applications using Docker and orchestrate them with Kubernetes.
● Develop and maintain Helm charts for managing Kubernetes deployments.
● Automate repetitive operational tasks using scripting languages such as Python, Bash, or PowerShell.
● Collaborate with development teams to ensure seamless integration and delivery of applications.
● Monitor and troubleshoot system performance, ensuring high availability and reliability of services.
● Configure and maintain cloud infrastructure on AWS, Azure, or Google Cloud Platform (GCP).
● Implement and maintain security best practices in cloud environments and CI/CD pipelines.
● Manage and optimize system logs and metrics using monitoring tools like Prometheus, Grafana, the ELK Stack, or cloud-native monitoring tools.
Key Requirements:
● Experience: 1-2 years in a DevOps or similar role.
● Linux: Strong proficiency in Linux-based systems, including configuration, troubleshooting, and performance tuning, is a must.
● Kubernetes: Experience with Kubernetes for container orchestration, including knowledge of deployments, services, PVs, PVCs, and ingress controllers.
● CI/CD Tools: Knowledge of tools like Jenkins, GitHub Actions, GitLab CI/CD, or CircleCI for continuous integration and deployment.
● Cloud Platforms: Hands-on experience with at least one major cloud provider (AWS, Azure, or GCP).
● Scripting: Proficiency in automation scripting using Python, Bash, or similar languages.
● Monitoring: Understanding of monitoring and logging tools such as Prometheus, Grafana, or the ELK Stack.
● Version Control: Strong experience with version control tools like Git.
Preferred Qualifications:
● Knowledge of networking concepts (e.g., DNS, load balancing, firewalls).
● Familiarity with security practices such as role-based access control (RBAC) and secrets management.
● Exposure to Agile/Scrum methodologies and tools like Jira.
● Certification in any of the cloud platforms (AWS Certified DevOps Engineer, Azure DevOps Expert, or GCP Professional DevOps Engineer) is a plus.
Soft Skills:
● Strong problem-solving and troubleshooting skills.
● Ability to work collaboratively in a team-oriented environment.
● Excellent communication and documentation skills.
● Proactive approach to learning new tools and technologies.
Note: Immediate joiners and candidates near the Mohali location are preferred.
Job Type: Full-time
Pay: ₹25,000.00 - ₹35,000.00 per month
Benefits: Paid time off
Schedule: Day shift, Monday to Friday
Supplemental Pay: Overtime pay
Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Required)
Experience: DevOps: 1 year (Required)
Work Location: In person
Speak with the employer: +91 8699032616
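Log and metric management of the kind this role describes often starts with small scripts before graduating to Prometheus or the ELK Stack. A hedged Python sketch (the log format and the 25% alert threshold are invented for illustration):

```python
from collections import Counter

def error_rate(log_lines):
    """Compute the fraction of ERROR entries in a batch of log lines.

    Sketch only: assumes one 'LEVEL message' entry per line.
    """
    levels = Counter(line.split(" ", 1)[0] for line in log_lines if line.strip())
    total = sum(levels.values())
    return levels["ERROR"] / total if total else 0.0

logs = [
    "INFO service started",
    "ERROR db timeout",
    "INFO request ok",
    "ERROR db timeout",
]
rate = error_rate(logs)
print(rate)          # 0.5
print(rate > 0.25)   # True -> crosses a hypothetical alert threshold
```

In a real pipeline the same computation would be expressed as a Prometheus query or an ELK aggregation over a time window rather than a batch of lines.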


3.0 - 5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Job Description
Build robust ML pipelines and automate model training, evaluation, and deployment.
Optimize and tune models for financial time-series, pricing engines, and fraud detection.
Collaborate with data scientists and data engineers to deploy scalable and secure ML models.
Monitor model drift and data drift, and ensure models are retrained and updated as per regulatory norms.
Implement CI/CD for ML and integrate with enterprise applications.
Tech Stack
Languages: Python
ML Platforms: MLflow, Kubeflow
MLOps Tools: Airflow, MLReef, Seldon
Libraries: scikit-learn, XGBoost, LightGBM
Cloud: GCP AI Platform
Containerization: Docker, Kubernetes
Job Category: AI/ML Engineer
Job Type: Full Time
Job Location: Mumbai
Experience Level: 3 to 5 years
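Monitoring data drift, as this role requires, can range from full statistical tests (PSI, Kolmogorov-Smirnov) to simple heuristics. A hedged, stdlib-only sketch of the simplest version, a standardized mean-shift check for one feature (the threshold of 3 standard deviations is an illustrative choice, not a standard):

```python
from statistics import mean, stdev

def drift_score(baseline, current):
    """Standardized mean shift of `current` vs. `baseline` for one feature.

    Simplified drift check (not a full PSI/KS test): how many baseline
    standard deviations the current mean has moved.
    """
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return 0.0
    return abs(mean(current) - mu) / sigma

baseline = [100, 102, 98, 101, 99, 100, 103, 97]   # training-time feature values
current = [110, 112, 108, 111, 109, 110, 113, 107]  # live feature values

score = drift_score(baseline, current)
print(score)        # 5.0
print(score > 3.0)  # True -> flag the feature for retraining review
```

Real drift monitors (e.g., what MLflow or Seldon deployments are typically paired with) compare full distributions per feature and per prediction window, not just means.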


3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Job Summary
We are looking for a skilled and experienced Backend Python Developer with strong hands-on expertise in Django. The ideal candidate should also have working knowledge of frontend frameworks like React.js or Node.js, enabling seamless collaboration with UI teams or full-stack capability. You will play a key role in designing, developing, and maintaining robust backend systems and APIs that power our applications.
Key Responsibilities
Design and implement scalable, secure, and maintainable backend solutions using Python and Django.
Build and integrate RESTful APIs to connect frontend applications and third-party services.
Write reusable, testable, and efficient code following best practices.
Collaborate with frontend developers to integrate user-facing elements with server-side logic.
Maintain code quality through automated tests, code reviews, and continuous integration.
Optimize application performance, including database queries and caching strategies.
Participate in architectural discussions and contribute to technical documentation.
Deploy and maintain applications on cloud environments (AWS, Azure, GCP preferred).
Ensure adherence to security and data protection best practices.
Required Skills And Qualifications
3+ years of hands-on experience in Python, with solid knowledge of the Django framework.
Proficiency in building RESTful APIs using Django REST Framework (DRF).
Strong experience with relational databases (PostgreSQL, MySQL) and ORMs.
Familiarity with JavaScript and frontend frameworks like React.js, or server-side JavaScript with Node.js.
Good understanding of HTML, CSS, and JavaScript.
Experience with version control systems (e.g., Git).
Understanding of containerization tools like Docker and deployment pipelines.
Familiarity with cloud platforms like AWS, Azure, or GCP.
Excellent problem-solving skills and ability to work independently.
Preferred/Good To Have
Experience with CI/CD tools (Jenkins, GitHub Actions, GitLab CI).
Exposure to microservices or serverless architecture.
Familiarity with Agile methodologies.
Educational Qualification
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Skills: aws, mysql, python, javascript, react native, postgresql, django rest framework, django, html, restful apis, git, gcp, sql, react.js, docker, node.js, css, azure
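One of the caching strategies a role like this touches is time-bounded caching of expensive lookups. A hedged, in-process sketch (in a real Django deployment you would typically reach for `django.core.cache` or Redis instead of a per-process dict):

```python
import time
from functools import wraps

def ttl_cache(seconds):
    """Cache a function's results for `seconds`, then recompute."""
    def decorator(fn):
        store = {}  # args -> (expiry_timestamp, value)

        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[0] > now:
                return hit[1]           # fresh cached value
            value = fn(*args)
            store[args] = (now + seconds, value)
            return value
        return wrapper
    return decorator

calls = {"n": 0}

@ttl_cache(seconds=60)
def expensive_lookup(user_id):
    calls["n"] += 1                     # stands in for a slow DB query
    return f"profile-{user_id}"

print(expensive_lookup(7))  # computed: 'profile-7'
print(expensive_lookup(7))  # served from cache, no recomputation
print(calls["n"])           # 1
```

The TTL trades staleness for load: a 60-second window means at most one slow query per key per minute, at the cost of serving data up to a minute old.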


10.0 - 15.0 years

15 - 30 Lacs

Mumbai, Gurugram

Work from Office


Education: B.E./B.Tech/MCA in Computer Science
Experience: Must have 10+ years of relevant experience in the field of DevOps
Role Summary:
A highly experienced DevOps Architect and Level 4 DevOps Subject Matter Expert (SME) with deep technical expertise in building scalable, secure, and fully automated infrastructure environments. This role is focused on delivering robust DevOps solutions, establishing architecture best practices, and driving automation across development and operations teams. The ideal candidate is a hands-on DevOps expert with advanced skills in cloud, containers, CI/CD, and infrastructure as code, combined with the ability to clearly articulate complex architectural strategies and troubleshoot critical issues. Additionally, this role emphasizes system resilience through the development of self-healing mechanisms and proactive failure detection.
Skills & Expertise:
Deep hands-on expertise in CI/CD tools, including Jenkins, Azure DevOps, Helm, GitOps, and ArgoCD, for implementing reliable, automated software delivery pipelines.
Advanced Infrastructure as Code (IaC) experience with tools such as Terraform, Ansible, SaltStack, ARM Templates, and Google Cloud Deployment Manager, enabling scalable and consistent infrastructure provisioning.
Expert-level understanding of container platforms, particularly Kubernetes and Docker, for orchestrated, secure, and highly available deployments.
Deep expertise in Kubernetes operations, including production-grade cluster management, autoscaling, Helm chart development, RBAC configuration, ingress controllers, and network policy enforcement.
Extensive cloud experience across AWS, Azure, and GCP, with deep knowledge of core services, networking, storage, identity, and security implementations.
Strong scripting and automation capabilities using Bash, Python, or Go, enabling development of robust automation tools and system-level integrations.
Comprehensive monitoring and observability expertise with Prometheus, Grafana, and the ELK stack for end-to-end visibility, alerting, and performance analysis.
Expert in designing and implementing secure, scalable, and resilient DevOps architectures, aligned with industry best practices for both cloud-native and hybrid environments.
Extensive experience in artifact management using JFrog Artifactory or Nexus, including repository structure, lifecycle management, promotion strategies, and access controls.
Proficient in identifying infrastructure or application failures, performing root cause analysis, and developing self-healing scripts to restore service availability automatically and reduce manual intervention.
Familiar with DevSecOps and compliance frameworks, including IAM policies, secrets management, least-privilege access, and policy-as-code practices.
Recognized DevOps expert and L4 SME, supporting and mentoring engineering teams in DevOps adoption, tooling, automation strategies, and architectural decision-making.
Continuously evaluates and recommends emerging tools, frameworks, and practices to enhance deployment speed, pipeline efficiency, and platform reliability.
Experienced in diagnosing and resolving complex infrastructure and deployment issues, ensuring high system availability and minimal disruption.
Clear and confident communicator, able to present and explain architectural strategies and system design decisions to technical and non-technical stakeholders.
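The self-healing scripts this role emphasizes generally follow one loop: probe, restart on failure, escalate after a bounded number of attempts. A hedged Python sketch with injected callables (in production `check` might wrap a health endpoint and `restart` a `systemctl restart`; both are hypothetical here):

```python
def self_heal(check, restart, max_restarts=3):
    """Minimal self-healing loop: probe a service and restart on failure.

    Returns the number of restarts performed before the check passed;
    raises once the restart budget is exhausted so a human is paged.
    """
    restarts = 0
    while not check():
        if restarts >= max_restarts:
            raise RuntimeError("service did not recover; escalate to on-call")
        restart()
        restarts += 1
    return restarts

# Fake service that recovers on its second restart, for demonstration.
state = {"healthy": False, "restarts": 0}

def fake_check():
    return state["healthy"]

def fake_restart():
    state["restarts"] += 1
    if state["restarts"] == 2:
        state["healthy"] = True

print(self_heal(fake_check, fake_restart))  # 2
```

Bounding the restart count matters: without it, a persistently failing dependency would keep the remediation loop thrashing instead of escalating.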


0.0 - 1.0 years

0 Lacs

Mohali, Punjab

On-site


About the Role:
We're looking for a skilled and motivated Python Developer to join our engineering team. You will be responsible for developing scalable backend systems, APIs, and integrations, and collaborating with cross-functional teams to deliver robust and high-performing solutions.
Experience: 1-3 years
Note: Immediate joiners and candidates located nearby are preferred.
Key Responsibilities:
Design, develop, and maintain backend services and APIs using Python.
Work with frameworks like Django, Flask, or FastAPI.
Build and integrate RESTful and/or GraphQL APIs.
Write clean, scalable, and well-documented code.
Collaborate with frontend developers, DevOps, and product teams.
Implement best practices in software development and testing.
Requirements:
Strong proficiency in Python and understanding of OOP principles.
Experience with Django/Flask/FastAPI.
Knowledge of relational (PostgreSQL, MySQL) and/or NoSQL databases (MongoDB).
Familiarity with Docker, Git, and CI/CD pipelines.
Experience with cloud platforms (AWS, GCP, or Azure) is a plus.
Good problem-solving and communication skills.
Job Type: Full-time
Pay: ₹25,000.00 - ₹35,000.00 per month
Benefits: Paid time off
Schedule: Day shift, Monday to Friday
Supplemental Pay: Overtime pay
Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Required)
Experience: Python/Django: 1 year (Required)
Work Location: In person


2.0 - 3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


Full Stack PHP Developer
Position: Full Stack PHP Developer
Experience Level: 2-3 years
Employment Type: Full-time
About the Role:
We are seeking a skilled Full Stack PHP Developer to join our development team. The ideal candidate will have hands-on experience building robust web applications using PHP and modern web technologies. You'll work on both front-end and back-end development, contributing to the entire software development lifecycle.
Key Responsibilities:
Develop and maintain web applications using PHP frameworks (Laravel, Symfony, or CodeIgniter)
Design and implement responsive front-end interfaces using HTML5, CSS3, JavaScript, and modern frameworks
Build and optimize database schemas and write efficient SQL queries (MySQL, PostgreSQL)
Integrate third-party APIs and develop RESTful web services
Collaborate with cross-functional teams to define, design, and ship new features
Debug and troubleshoot application issues across the full stack
Write clean, maintainable, and well-documented code
Participate in code reviews and follow best practices for version control (Git)
Optimize application performance and ensure scalability
Required Qualifications:
2-3 years of professional experience in PHP development
Strong proficiency in at least one PHP framework (Laravel preferred)
Solid understanding of front-end technologies: HTML5, CSS3, JavaScript, jQuery
Experience with modern JavaScript frameworks/libraries (React, Vue.js, or Angular)
Proficiency in database design and management (MySQL, PostgreSQL)
Knowledge of version control systems (Git)
Understanding of RESTful API development and integration
Familiarity with responsive web design principles
Basic understanding of server management and deployment processes
Preferred Qualifications:
Experience with cloud platforms (AWS, Google Cloud, or Azure)
Knowledge of containerization technologies (Docker)
Familiarity with testing frameworks (PHPUnit, Jest)
Understanding of Agile/Scrum methodologies
Experience with package managers (Composer, npm)
Knowledge of web security best practices
Exposure to DevOps practices and CI/CD pipelines
What We Offer:
Competitive salary commensurate with experience
Health and dental insurance
Professional development opportunities
Flexible work arrangements
Collaborative and innovative work environment
Opportunity to work on challenging projects with modern technologies
How to Apply:
Please submit your resume, portfolio, and a brief cover letter explaining why you are interested in this position. Include links to relevant projects or GitHub repositories that showcase your work.


2.0 - 3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


About the Company:
Relay Human Cloud is a young and dynamic company that helps some of the top US-based companies expand their teams internationally. Relay is a truly global company with operations in the US, India, Honduras, and Mexico (we are also adding a few more countries soon). Our core focus is to enable companies to connect with the best international talent. Relay helps its clients mainly in the following areas: Accounting & Finance, Administration, Operations, Space Planning, Leasing, Data Science, Data Search, Machine Learning, and Artificial Intelligence. Relay India operates from the Ahmedabad and Vadodara offices.
Job Summary:
We are seeking a .NET Developer to join our software development team in Ahmedabad. The ideal candidate should be proficient in both back-end and front-end development, capable of building and maintaining high-performance web applications while ensuring scalability and security.
Key Responsibilities:
Design, develop, and maintain web applications using .NET 8 / .NET Core / .NET Framework, C#, ASP.NET MVC, and Web API.
Implement front-end solutions using React.js / Angular / Vue.js, JavaScript (TypeScript preferred), HTML, and CSS.
Develop and optimize SQL Server / PostgreSQL / MySQL database schemas and queries.
Build and integrate RESTful APIs for seamless communication between front-end and back-end systems.
Work with cloud platforms (Azure) for deployment and infrastructure management.
Collaborate with cross-functional teams to gather and refine technical requirements.
Ensure code quality through code reviews, unit testing, and debugging.
Stay updated with the latest industry trends and best practices in software development.
Required Skills & Qualifications:
2-3 years of experience as a Full Stack Developer with strong .NET expertise.
Strong back-end skills in .NET 8 / .NET Core / .NET Framework, C#, ASP.NET MVC, and Web API.
Proficiency in React.js / Angular / Vue.js, JavaScript, HTML, and CSS.
Experience with SQL databases (SQL Server, PostgreSQL, MySQL) and ORM frameworks like Entity Framework.
Familiarity with cloud services (Azure) and DevOps tools for CI/CD.
Understanding of microservices architecture and containerization (Docker, Kubernetes) is a plus.
Strong problem-solving and analytical skills.
Excellent communication and teamwork abilities.


6.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


Job description
We're seeking a highly skilled Backend Engineer to design, develop, and maintain the robust infrastructure that powers our applications. In this role, you'll translate complex business needs into scalable, efficient, and production-ready solutions. You'll be a key player in shaping our technology and have a direct impact on the success of our products. If you excel at building high-performing systems and possess a strong background in Node.js ecosystems, we encourage you to apply.
What You'll Do
Develop and maintain high-performance Node.js APIs to support core business functions.
Design and implement optimised data models for both SQL and NoSQL databases.
Apply advanced software design patterns to address scalability, maintainability, and performance challenges.
Develop and deploy microservices with a focus on balancing technical excellence and business objectives.
Establish comprehensive automated testing strategies to ensure application reliability and stability.
Integrate external services using well-defined APIs and event-driven architectures.
Implement monitoring and logging solutions to provide actionable insights into system health and performance.
Your Qualifications
6+ years of experience in building and maintaining production-level backend systems.
Expertise in Node.js, modern JavaScript (ES6+), and TypeScript.
Proven experience applying software design patterns in real-world projects.
Hands-on experience with Node.js frameworks such as Express, Koa, or NestJS.
Solid experience with both SQL (e.g., PostgreSQL) and NoSQL (e.g., MongoDB) databases.
Proficiency in containerisation (Docker).
Experience in designing and managing CI/CD pipelines for backend applications.
Experience with cloud platforms, preferably AWS, for deployment and management.
A strong commitment to writing clean, testable, and secure code.
Bonus Points
Experience designing and building highly scalable and complex systems.
Experience with message brokers (e.g., Kafka, RabbitMQ).
In-depth knowledge of OAuth 2.0, JWT, and API security.
Experience with monitoring tools like Prometheus and Grafana/ELK.
Familiarity with Infrastructure as Code tools (e.g., Terraform).
Demonstrated ability to debug and optimise performance in production environments.
Contributions to open-source projects or technical publications.
Technical Skills Assessment
We'll evaluate your skills through:
Live Coding: solving practical backend development problems using Node.js and TypeScript.
System Design: architecting scalable systems and clearly articulating design decisions.

Posted 1 day ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote


We’re now looking for a Senior DevOps Engineer to join our fast-growing, remote-first team. If you're passionate about automation, scalable cloud systems, and supporting high-impact AI workloads, we’d love to connect.

What You'll Do (Responsibilities):

  • Design, implement, and manage scalable, secure, and high-performance cloud-native infrastructure across Azure.
  • Build and maintain Infrastructure as Code (IaC) using Terraform or CloudFormation.
  • Develop event-driven and serverless architectures using AWS Lambda, SQS, and SAM.
  • Architect and manage containerized applications using Docker, Kubernetes, ECR, ECS, or AKS.
  • Establish and optimize CI/CD pipelines using GitHub Actions, Jenkins, AWS CodeBuild, and CodePipeline.
  • Set up and manage monitoring, logging, and alerting using Prometheus + Grafana, Datadog, and centralized logging systems.
  • Collaborate with ML Engineers and Data Engineers to support MLOps pipelines (Airflow, ML Pipelines) and Bedrock with TensorFlow or PyTorch.
  • Implement and optimize ETL/data-streaming pipelines using Kafka, EventBridge, and Event Hubs.
  • Automate operations and system tasks using Python and Bash, along with cloud CLIs and SDKs.
  • Secure infrastructure using IAM/RBAC and follow best practices in secrets management and access control.
  • Manage DNS and networking configurations using Cloudflare, VPC, and PrivateLink.
  • Lead architecture implementation for scalable and secure systems, aligning with business and AI solution needs.
  • Conduct cost optimization through budgeting, alerts, tagging, right-sizing resources, and leveraging spot instances.
  • Contribute to backend development in Python (web frameworks), REST/socket and gRPC design, and testing (unit/integration).
  • Participate in incident response, performance tuning, and continuous system improvement.

Good to Have:

  • Hands-on experience with ML lifecycle tools like MLflow and Kubeflow
  • Previous involvement in production-grade AI/ML projects or data-intensive systems
  • Startup or high-growth tech company experience

Qualifications:

  • Bachelor’s degree in Computer Science, Information Technology, or a related field.
  • 5+ years of hands-on experience in a DevOps, SRE, or Cloud Infrastructure role.
  • Proven expertise in multi-cloud environments (AWS, Azure, GCP) and modern DevOps tooling.
  • Strong communication and collaboration skills to work across engineering, data science, and product teams.

Benefits:

  • Competitive salary
  • Support for continual learning (free books and online courses)
  • Leveling-up opportunities
  • Diverse team environment

Posted 1 day ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Solugenix is a leader in IT services, delivering cutting-edge technology solutions, exceptional talent, and managed services to global enterprises. With extensive expertise in highly regulated and complex industries, we are a trusted partner for integrating advanced technologies with streamlined processes. Our solutions drive growth, foster innovation, and ensure compliance, providing clients with reliability and a strong competitive edge.

Recognized as a 2024 Top Workplace, Solugenix is proud of its inclusive culture and unwavering commitment to excellence. Our recent expansion, with new offices in the Dominican Republic, Jakarta, and the Philippines, underscores our growing global presence and ability to offer world-class technology solutions. Partnering with Solugenix means more than just business; it means having a dedicated partner focused on your success in today's fast-evolving digital world.

Job Title: Senior DevSecOps / CI/CD Platform Engineer
Location: Bengaluru / Hyderabad
Experience: 8+ years

Job Summary:

We are seeking a highly experienced Platform Engineer in CI/CD to lead the design, implementation, and optimization of our continuous integration and continuous delivery infrastructure. This role requires deep technical proficiency, a strategic mindset, and the ability to influence engineering best practices across teams. As a CI/CD Engineering Specialist, you will work closely with software engineers, DevOps teams, and security stakeholders to build and maintain resilient, scalable, and secure delivery pipelines that accelerate software development and deployment.

The ideal candidate will also have practical experience:

  • Building and operating CI/CD platforms in regulated, multi-tenant environments.
  • Enabling DevSecOps pipelines that support JavaScript, TypeScript, Node.js, Spring Boot, and containerized workloads.
  • Delivering developer self-service and golden paths via portal-based automation (e.g., Backstage.io).
  • Securing software supply chains and automating risk visibility through tools like Snyk, Chainguard, and Wiz.io.

Role and Responsibilities:

Next-Gen CI/CD Platform
  • Design and implement container-native CI/CD pipelines using Harness, ArgoCD, and GitHub Actions.
  • Create reusable templates and workflows that support multiple application stacks.
  • Ensure delivery pipelines are secure, scalable, and compliant with audit and regulatory requirements.
  • Administer and enforce GitHub SCM policies across large engineering teams.

Artifact & Supply Chain Security
  • Manage artifacts and dependencies using Maven, NPM, NuGet, and Nexus IQ.
  • Integrate security scanning and license validation with Snyk, Chainguard, and StrongDM.
  • Standardize and govern dependency hygiene across CI/CD stages.

Observability & DevOps Intelligence
  • Instrument and monitor build environments with Datadog, OpenTelemetry (OTEL), Splunk, and Cribl.
  • Automate analytics and insights using AAI (Automated Analytics & Intelligence) to improve platform reliability.
  • Track compliance and performance metrics across DevSecOps workflows.

Automation & Developer Enablement
  • Use Terraform Enterprise and Ansible Automation Platform for infrastructure provisioning.
  • Integrate developer portals (Backstage.io) for self-service environment and pipeline provisioning.
  • Implement secure secrets and identity practices using AWS IAM, KMS, Secrets Manager, and Okta.

Required Skills:

  • CI/CD Platform Engineering: Harness.io, ArgoCD, Docker, GitHub Actions, and compliant pipeline design
  • Artifact & Dependency Management: Nexus IQ/RM, NPM, NuGet, Maven, Snyk, Chainguard
  • Software Configuration Management (SCM): GitHub administration, branching strategies, and Backstage.io integration

Nice to Have:

  • Certifications: AWS DevOps Engineer, Certified Kubernetes Administrator (CKA), etc.
  • Experience with monitoring/logging tools (e.g., Prometheus, ELK, Grafana, Datadog, Splunk)
  • Contributions to open-source DevOps or CI/CD tools

Posted 1 day ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: Associate Software Engineer (Azure DevOps)
Job Location: Hyderabad, India
Worksite: Onsite (100%)

About WCT:

WaferWire Technology Solutions (WCT) specializes in delivering comprehensive Cloud, Data, and AI solutions through Microsoft's technology stack. Our services include Strategic Consulting, Data/AI Estate Modernization, and Cloud Adoption Strategy. We excel in Solution Design encompassing Application, Data, and AI Modernization, as well as Infrastructure Planning and Migrations. Our Operational Readiness services ensure seamless DevOps, MLOps, AIOps, and SecOps implementation. We focus on implementation and deployment of modern applications, continuous performance optimization, and future-ready innovations in AI, ML, and security enhancements. Delivering from Redmond, WA (USA), Guadalajara (Mexico), and Hyderabad (India), our scalable solutions cater precisely to diverse business requirements across multiple time zones (US time zone alignment).

About the Role:

We are looking for a passionate and enthusiastic fresher to join our DevOps team. As an Azure DevOps Engineer, you will assist in automating and streamlining our development and deployment processes. This is a fantastic opportunity for someone eager to kickstart their career in cloud computing, CI/CD, and infrastructure automation.

Key Responsibilities:

  • Assist in setting up and maintaining Azure DevOps pipelines for build, test, and deployment.
  • Support version control, branch management, and code integration using Git/Azure Repos.
  • Collaborate with developers, QA, and IT teams to automate processes and improve delivery speed.
  • Monitor system performance and troubleshoot deployment issues.
  • Learn and implement Infrastructure as Code using ARM templates, Bicep, or Terraform.
  • Support the containerization of applications using Docker and orchestration with Kubernetes (AKS).
  • Maintain and update documentation related to DevOps processes and tools.

Skills & Qualifications:

  • Bachelor's degree in Computer Science, IT, or a related field.
  • Basic understanding of Azure cloud services and DevOps principles.
  • Familiarity with CI/CD pipelines, Git, scripting (PowerShell, Bash), and YAML.
  • Eagerness to learn tools like Azure Pipelines, Azure Boards, Repos, and Artifacts.
  • Good problem-solving and communication skills.
  • Optional: Knowledge of Docker, Kubernetes, or any programming language (C#, Python, Java, etc.)

Posted 1 day ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


We're looking for a passionate and experienced Solution Architect – AWS Cloud to join our dynamic team at SHI LOCUZ!

Experience: 5+ years
Location: Hyderabad
Availability: Immediate / early joiners preferred
Note: Hands-on experience with AWS is a must

Key Skills & Responsibilities:

✅ Deep expertise in the AWS cloud platform – architecture, services, and enterprise-scale implementations
✅ Strong hands-on experience with Kubernetes & Docker
✅ Ability to translate business requirements into scalable, secure cloud solutions
✅ Experience in selecting and integrating AWS services effectively
✅ Leadership in guiding teams and resolving technical issues
✅ Solid scripting skills (Linux/Windows) & automation experience
✅ Proven ability to lead POCs or Professional Services workloads
✅ Passion for knowledge sharing and continuous improvement

This is your chance to work on cutting-edge cloud projects and make an impact!

Posted 1 day ago

Apply

6.0 years

0 Lacs

India

Remote


About Us

We are building a next-generation digital healthcare platform focused on mental health, wellness, and integrated hospital systems. Our platform leverages AI, 3D motion capture, and personalized coaching to support users across schools, hospitals, and digital clinics. We’re now expanding our backend ERP capabilities using Odoo to build robust tools for registration, appointments, workflows, and clinical operations.

Role Overview

We’re looking for a mid-senior level Odoo Developer to lead and manage the development of our Healthcare & Hospital ERP modules. You’ll work with our cross-functional team (product, design, AI, clinical operations) to build scalable, secure, and modular systems in Odoo. This is a critical backend role with full ownership of Odoo-based implementation and custom module development.

Key Responsibilities

  • Design and develop custom Odoo modules for healthcare and hospital management workflows.
  • Build features such as patient registration, appointment scheduling, role-based access, EMR, billing, reporting, and entity management (hospitals, schools, users).
  • Integrate Odoo with our mobile/web frontend and AI-based doctor tools.
  • Customize existing Odoo modules to suit healthcare use cases.
  • Ensure code quality, security, and modularity in all Odoo implementations.
  • Collaborate with product and backend leads to define system architecture.
  • Maintain documentation and support deployments.

Must-Have Skills

  • 5–6 years of experience in Odoo development (v13+).
  • Strong command of Python, PostgreSQL, and the Odoo ORM.
  • Hands-on experience in customizing and building Odoo modules from scratch.
  • Good understanding of Odoo’s core modules (CRM, HR, Accounting, Inventory) and ability to extend them.
  • Experience with API integrations (REST/GraphQL).
  • Clean coding practices with Git-based version control.
  • Experience with healthcare or workflow-heavy ERP is a big plus.

Good to Have

  • Prior experience building Hospital Information Systems (HIS) or healthcare CRMs.
  • Familiarity with HL7/FHIR standards or medical data structures.
  • Knowledge of DevOps (Docker, CI/CD) for deployment.
  • Comfort working in startup environments and taking ownership.

Work Environment

  • Flexible work mode – you can work fully remotely or from our Bengaluru office.
  • Collaborative and fast-paced product team.
  • Strong focus on healthcare innovation and impact.

Salary

Competitive – based on experience and fit. Includes performance-linked incentives.

How to Apply

Send your resume, GitHub (if any), and a brief note about your Odoo experience to kushal@cadabams.com with the subject “Odoo Developer – Healthcare ERP”.

Posted 1 day ago

Apply

Exploring Docker Jobs in India

Docker technology has gained immense popularity in the IT industry, and job opportunities for professionals skilled in Docker are on the rise in India. Companies are increasingly adopting containerization to streamline their development and deployment processes, creating a high demand for Docker experts in the job market.
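
The containerization workflow described above starts from a Dockerfile. As a minimal sketch of what that looks like in practice (the Node.js base image, port, and entry file here are illustrative assumptions, not tied to any job listed on this page):

```dockerfile
# Start from an official slim Node.js base image (assumed stack for illustration)
FROM node:20-slim

# Set the working directory inside the image
WORKDIR /app

# Copy dependency manifests first so this layer is cached between builds
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application source
COPY . .

# Document the port the service listens on (assumed here)
EXPOSE 3000

# Default command when a container starts from this image (entry file is hypothetical)
CMD ["node", "server.js"]
```

Building and running it would typically be `docker build -t my-service .` followed by `docker run -p 3000:3000 my-service`.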

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Chennai

These cities are known for their vibrant tech scene and host a large number of companies actively seeking Docker professionals.

Average Salary Range

The salary range for Docker professionals in India varies based on experience levels. Entry-level positions may start at around ₹4-6 lakhs per annum, while experienced Docker engineers can earn upwards of ₹15-20 lakhs per annum.

Career Path

In the Docker job market, a typical career path may involve starting as a Junior Developer, progressing to a Senior Developer, and eventually moving into roles like Tech Lead or DevOps Engineer as one gains more experience and expertise in Docker technology.

Related Skills

In addition to Docker expertise, professionals in this field are often expected to have knowledge of related technologies such as Kubernetes, CI/CD tools, Linux administration, scripting languages like Bash or Python, and cloud platforms like AWS or Azure.
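
These related skills usually come together in a delivery pipeline. Below is a hedged sketch of a GitHub Actions workflow that builds and pushes a Docker image on each push to `main`; the registry, image name, and secret names are placeholders for illustration:

```yaml
# Hypothetical GitHub Actions workflow: build and push a Docker image
name: build-image
on:
  push:
    branches: [main]

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      # Check out the repository containing the Dockerfile
      - uses: actions/checkout@v4

      # Log in to the registry; credentials come from repository secrets
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      # Build the image from the repo's Dockerfile and push it
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: example/my-service:latest
```

The same shape carries over to Jenkins or GitLab CI/CD: check out, authenticate to a registry, build, push.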

Interview Questions

  • What is Docker and how does it differ from virtual machines? (basic)
  • Explain the difference between an image and a container in Docker. (basic)
  • How do you manage data persistence in Docker containers? (medium)
  • What is Docker Compose and how is it used in container orchestration? (medium)
  • How can you secure your Docker containers? (medium)
  • Explain the use of Docker volumes and bind mounts. (medium)
  • What is Docker Swarm and how does it compare to Kubernetes? (advanced)
  • Describe the networking modes available for Docker containers. (advanced)
  • How would you troubleshoot a Docker container that is not starting up correctly? (medium)
  • What are the advantages of using Docker for microservices architecture? (medium)
  • How can you monitor Docker containers in production environments? (medium)
  • Explain the concept of Dockerfile and its significance in containerization. (basic)
  • What is the purpose of a Docker registry and how does it work? (medium)
  • How do you scale Docker containers horizontally and vertically? (medium)
  • What are the best practices for Docker image optimization? (advanced)
  • Describe the differences between Docker CE and Docker EE. (basic)
  • How can you automate Docker deployments using tools like Jenkins or GitLab CI/CD? (medium)
  • What security measures can you implement to protect Docker containers from vulnerabilities? (medium)
  • How would you handle resource constraints in Docker containers? (medium)
  • What is the significance of multi-stage builds in Docker? (advanced)
  • Explain the concept of container orchestration and its importance in Docker environments. (medium)
  • How do you ensure high availability for Dockerized applications? (medium)
  • What are the key differences between Docker and other containerization technologies like LXC or rkt? (advanced)
  • How would you design a CI/CD pipeline for Dockerized applications? (medium)
  • Discuss the pros and cons of using Docker for development and production environments. (medium)
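
A worked sketch helps with several of the questions above, particularly those on multi-stage builds and image optimization. Assuming a small Go service purely for illustration (the module layout is hypothetical), a multi-stage Dockerfile compiles in a full toolchain image and ships only the binary in a minimal runtime image:

```dockerfile
# Stage 1: build the binary with the full Go toolchain
FROM golang:1.22 AS builder
WORKDIR /src
COPY . .
# Disable CGO so the static binary runs in a minimal base image
RUN CGO_ENABLED=0 go build -o /bin/app .

# Stage 2: copy only the compiled binary into a tiny runtime image
FROM alpine:3.20
COPY --from=builder /bin/app /usr/local/bin/app
ENTRYPOINT ["app"]
```

Because the final stage starts from `alpine`, the shipped image contains the compiled binary but none of the Go toolchain, which is the core of the answer to the image-optimization and multi-stage-build questions.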

Closing Remark

As you explore job opportunities in the Docker ecosystem in India, remember to showcase your skills and knowledge confidently during interviews. By preparing thoroughly and staying updated on the latest trends in Docker technology, you can position yourself as a desirable candidate for top companies in the industry. Good luck with your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies