
15819 Containerization Jobs - Page 41

2.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

📣 We are seeking a proactive and detail-oriented Data Scientist to join our team and contribute to the development of intelligent AI-driven production scheduling solutions. This role is ideal for candidates passionate about applying machine learning, optimization techniques, and operational data analysis to enhance decision-making and drive efficiency in manufacturing or process industries. You will play a key role in designing, developing, and deploying smart scheduling algorithms integrated with real-world constraints like machine availability, workforce planning, shift cycles, material flow, and due dates.

✔️ Experience: Minimum 2+ years
📍 Location: Noida

✔️ Must-Have Skills:
Minimum 2+ years of experience in data science roles with exposure to AI/ML pipelines, predictive modelling, optimization techniques, or industrial scheduling.
Proficiency in Python, especially with pandas, numpy, scikit-learn; ortools, pulp, cvxpy or other optimization libraries; matplotlib and plotly for visualization.
Solid understanding of production planning & control processes (dispatching rules, job-shop scheduling, etc.) and machine learning fundamentals (regression, classification, clustering).
Familiarity with version control (Git), Jupyter/VS Code environments, and CI/CD principles.

Preferred (Nice-to-Have) Skills:
Time-series analysis, sensor data, or anomaly detection.
Manufacturing execution systems (MES), SCADA, PLC logs, or OPC UA data.
Simulation tools (SimPy, Arena, FlexSim) or digital twin technologies.
Exposure to containerization (Docker) and model deployment (FastAPI, Flask).
Understanding of lean manufacturing principles, Theory of Constraints, or Six Sigma.

🔎 Key Responsibilities:
1. AI-Based Scheduling Algorithm Development: Develop and refine scheduling models using Constraint Programming, Mixed Integer Programming (MIP), metaheuristic algorithms (e.g., Genetic Algorithm, Ant Colony, Simulated Annealing), and Reinforcement Learning or Deep Q-Learning. Translate shop-floor constraints (machines, manpower, sequence dependencies, changeovers) into mathematical models.
2. Data Exploration & Feature Engineering: Analyze structured and semi-structured production data from MES, SCADA, ERP, and other sources. Build pipelines for data preprocessing, normalization, and handling missing values. Perform feature engineering to capture important relationships like setup times, cycle duration, and bottlenecks.
3. Model Validation & Deployment: Use statistical metrics and domain KPIs (e.g., throughput, utilization, makespan, WIP) to validate scheduling outcomes. Deploy solutions using APIs, dashboards (Streamlit, Dash), or integration with existing production systems. Support ongoing maintenance, updates, and performance tuning of deployed models.
4. Collaboration & Stakeholder Engagement: Work closely with production managers, planners, and domain experts to understand real-world constraints and validate model results. Document solution approaches and model assumptions, and provide technical training to stakeholders.

🎓 Qualifications: Bachelor's or Master's degree in Data Science, Computer Science, Industrial Engineering, Operations Research, Applied Mathematics, or equivalent.
🌐 To know more about us, visit: https://algo8.ai/
📧 Interested applicants can share their resume at: smita.choudhury@algo8.ai
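For context on the kind of constraint-based scheduling this posting describes, here is a minimal sketch of a job-shop model in OR-Tools CP-SAT that minimizes makespan. The jobs, machines, and durations are invented for illustration; they are not from the posting.

```python
# Minimal job-shop scheduling sketch with OR-Tools CP-SAT (pip install ortools).
# The job/machine/duration data below is illustrative only.
from ortools.sat.python import cp_model

jobs = [  # each job: ordered list of (machine_id, duration) tasks
    [(0, 3), (1, 2)],
    [(1, 4), (0, 2)],
    [(0, 2), (1, 3)],
]
horizon = sum(d for job in jobs for _, d in job)

model = cp_model.CpModel()
all_tasks, machine_intervals = {}, {}
for j, job in enumerate(jobs):
    for t, (m, d) in enumerate(job):
        start = model.NewIntVar(0, horizon, f"s_{j}_{t}")
        end = model.NewIntVar(0, horizon, f"e_{j}_{t}")
        interval = model.NewIntervalVar(start, d, end, f"i_{j}_{t}")
        all_tasks[j, t] = (start, end)
        machine_intervals.setdefault(m, []).append(interval)
        if t > 0:  # precedence: a job's tasks run in order
            model.Add(start >= all_tasks[j, t - 1][1])

for intervals in machine_intervals.values():  # one task at a time per machine
    model.AddNoOverlap(intervals)

makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, [all_tasks[j, len(job) - 1][1] for j, job in enumerate(jobs)])
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("makespan:", solver.Value(makespan))
```

Real models of this kind add changeover times, shift calendars, and due-date penalties as further constraints and objective terms.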

Posted 4 days ago

Apply

2.0 years

0 Lacs

Raipur, Chhattisgarh, India

Remote

Position: ML/AI Developer
Experience: 2+ years
Location: Raipur (Hybrid / On-site / Remote)

Overview: We at Magure Softwares are seeking a highly skilled ML/AI Developer who not only excels in building intelligent systems but also owns the complete lifecycle from development to deployment. You'll be part of a cutting-edge team working on AI-driven solutions, real-time data pipelines, and scalable ML products that make real business impact. We are looking for someone who is not just comfortable with writing models but is also proficient in production-level deployment, versioning, MLOps, and problem-solving through DSA.

Key Responsibilities:
1. Development & Model Engineering: Build, train, and optimize machine learning models for various domains. Use techniques such as regression, classification, NLP, deep learning, and RAG (retrieval-augmented generation). Implement data structures and algorithmic approaches to optimize model performance.
2. Deployment & MLOps: Deploy ML models on cloud or containerized environments (Azure, AWS, GCP, Azure Container Apps, etc.). Develop CI/CD pipelines using tools like GitHub Actions, Docker, MLflow, and Kubernetes for automated training and deployment. Manage model versioning, logging, rollback, and monitoring.
3. Tooling & Framework Expertise: Work with ML frameworks and libraries such as PyTorch, TensorFlow, Scikit-learn, HuggingFace, LangChain, OpenAI API, and LLaMA. Use Azure-based tools like Azure Document Intelligence, Azure AI Search, and Azure OpenAI.
4. Data Management: Automate data ingestion, validation (schema + drift checks), and preprocessing using Python and tools like Great Expectations. Handle structured, semi-structured, and unstructured data from sources like MongoDB, SQL, PDFs, etc.
5. Collaboration & Communication: Collaborate with backend engineers, data scientists, and product managers. Maintain clear documentation and contribute to knowledge-sharing sessions. Provide technical mentorship to junior developers.

Required Qualifications:
Bachelor's/Master's degree in Computer Science, Data Science, AI/ML, or equivalent.
2+ years of experience in building and deploying end-to-end ML solutions.
Strong command of Python and ML/DL frameworks (PyTorch, TensorFlow, Sklearn).
Hands-on experience with MLOps, CI/CD, containerization, and cloud deployments.
Strong understanding of Data Structures & Algorithms (DSA).
Familiarity with modern/traditional AI/LLM applications (e.g., RAG, LLM fine-tuning, chatbot systems).
Experience in building microservices or model APIs using FastAPI or Flask.

What Will Make You Stand Out:
You've deployed production-grade ML systems with rollback/version control.
You write clean, scalable, and modular code, and understand its lifecycle in the real world.
You have experience integrating AI with business tools like email systems, PDF document processing, and real-time analytics.

Why Join Magure Softwares?
Be part of AI solutions that drive real change.
Work in a collaborative, fast-paced, and technically exciting environment.
Opportunities to grow as a full-stack ML engineer, not just a data scientist.
Competitive compensation, high-impact roles, and a company that values innovation.

Apply Now: Send your resume to kirti@magureinc.com
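The posting asks for model APIs built with FastAPI or Flask. As a rough illustration (the model file, endpoint name, and feature names below are placeholders, not from the posting), a FastAPI model-serving service can be as small as this:

```python
# Minimal FastAPI model-serving sketch (pip install fastapi uvicorn scikit-learn joblib).
# Assumes a pre-trained scikit-learn pipeline saved as model.joblib.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="churn-model")       # hypothetical service name
model = joblib.load("model.joblib")      # placeholder model artifact

class Features(BaseModel):
    tenure_months: float
    monthly_spend: float

@app.post("/predict")
def predict(payload: Features) -> dict:
    # Score a single record; a production service would add validation,
    # logging, and a model-version field for rollback/monitoring.
    proba = model.predict_proba([[payload.tenure_months, payload.monthly_spend]])[0][1]
    return {"churn_probability": float(proba)}

# Run locally with: uvicorn app:app --reload
```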

Posted 4 days ago

Apply

3.0 years

12 - 30 Lacs

Bengaluru, Karnataka, India

On-site

We are a fast-growing IT services and consulting firm specializing in enterprise software development across finance, healthcare, and e-commerce sectors. Our on-site teams in India deliver mission-critical, Java-based applications that drive digital transformation and robust business outcomes for global clients.

Role & Responsibilities
Design, develop, and maintain scalable Java applications using Spring Boot and Hibernate frameworks.
Implement RESTful APIs and integrate third-party services to support front-end applications and mobile clients.
Collaborate with cross-functional teams (QA, DevOps, Business Analysts) to define requirements and deliver end-to-end solutions on schedule.
Optimize application performance, troubleshoot production issues, and implement fixes and enhancements.
Write clean, well-structured, and unit-tested code following industry best practices and coding standards.
Participate in code reviews and design discussions, and mentor junior developers to foster continuous improvement.

Skills & Qualifications
Must-Have
3+ years of professional experience in Java development (Java 8 or higher).
Hands-on expertise with Spring Boot and Hibernate/JPA frameworks.
Strong knowledge of RESTful API design and implementation.
Proficiency in SQL databases (MySQL, PostgreSQL) and writing optimized queries.
Experience with version control (Git) and build tools (Maven or Gradle).
Solid understanding of object-oriented design principles and the software development life cycle (SDLC).
Preferred
Familiarity with NoSQL databases (MongoDB, Cassandra) and caching solutions (Redis).
Exposure to containerization (Docker) and CI/CD pipelines (Jenkins, GitLab CI).

Benefits & Culture Highlights
Competitive salary with performance-based incentives and regular appraisals.
Collaborative, learning-focused environment with access to technical workshops and certifications.
On-site engagement in a centralized office hub with modern amenities and team events.

Skills: algorithm, data structure, software development life cycle (SDLC), Java, NoSQL databases, Cassandra, Hibernate/JPA, SQL databases, Docker, Gradle, MySQL, Maven, MongoDB, CI/CD pipelines (Jenkins, GitLab CI), Spring Boot, Git, object-oriented design principles, PostgreSQL, RESTful APIs, caching solutions (Redis)

Posted 4 days ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Need Immediate Joiners Only
10+ years' experience working in IT Development and/or Operations.
6 years' work experience as a Development, Operations, or Application Support lead.
Experience in maintaining multi-tier architecture applications including Java, Spring modules, Spring Boot, SOA, microservices, Kafka/RabbitMQ, and databases.
In-depth understanding of ITSM processes, implementation, and best practices.
4+ years of experience in DevOps and AWS Cloud.
Experience with Docker, Kubernetes, or similar containerization platforms.
Working experience of SDLC, Agile, and integrations.
Working knowledge/understanding of networks, infrastructure, and operating systems.
Experience with data engineering/analytics and application security.

Business / Customer: Ensure robust delivery. Maintain relationships by connecting with vertical and onsite program managers. Participate in regular governance meetings with customers. Own transitioning of new engagements. Drive transformation initiatives. Ensure customer presentations/visits are successful.

Project/Process: Align and implement the practice/organization-defined delivery drivers. Adhere to processes based on organization/client standards, frameworks, and tools. Implement the BIC framework within the Delivery Unit. Ensure that operations parameter targets are met, such as utilization, pyramid/span, job rotation, ELT induction, etc. Ensure timely forecasting is done to meet future resourcing requirements. Participate in organization- and practice-level initiatives.

Posted 4 days ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

Remote

Position: Solution Architect
Location: Mumbai

About LRN: LRN is the world's leading dedicated ethics and compliance SaaS company, helping more than 30 million people every year navigate complex regional and global regulatory environments and build ethical, responsible cultures. With over 3,000 clients across the US, EMEA, APAC, and Latin America—including some of the world's most respected and successful brands—we're proud to be the long-term partner trusted to reduce organizational risk and drive principled performance. Named one of Inc Magazine's 5000 Fastest-Growing Companies, LRN is redefining how organizations turn values into action. Our state-of-the-art platform combines intuitive design, mobile accessibility, robust analytics, and industry benchmarking—enabling organizations to create, manage, deliver, and audit ethics and compliance programs with confidence. Backed by a unique blend of technology, education, and expert advisement, LRN helps companies turn their values into real-world behaviors and leadership practices that deliver lasting competitive advantage.

About the role: LRN is seeking a seasoned Solution Architect with deep expertise in Java to lead the design and development of enterprise-grade applications. You will be instrumental in shaping our software architecture, driving best practices in Java-based development, and delivering scalable, secure, and high-performing solutions across multiple domains. Working closely with engineering, product, and QA teams, you'll architect backend systems, build RESTful APIs, and guide the integration of modern technologies in a distributed Agile environment. This role demands strong knowledge of service-oriented architecture, object-oriented programming, and cloud platforms like AWS. Experience with front-end frameworks (Angular), databases (PostgreSQL, MongoDB), and AI tools is a plus.

Requirements
What you'll do:
Design end-to-end architecture for enterprise web applications.
Define technical strategy and ensure alignment with business objectives.
Collaborate with stakeholders to convert business requirements into architectural blueprints.
Select and recommend Java frameworks, JavaScript frameworks, tools, and libraries that ensure scalability, performance, and maintainability.
Ensure non-functional requirements such as performance, security, availability, and scalability are addressed.
Review technical design documents, API contracts, and deployment architectures.
Guide and mentor development teams in architectural best practices, coding standards, and design patterns.
Evaluate and integrate third-party services, tools, and platforms.
Ensure compliance with security and regulatory standards.
Collaborate with DevOps teams to enable CI/CD pipelines, automated testing, and deployment strategies.
Conduct code reviews and architecture reviews to maintain code quality and reduce technical debt.

What we're looking for:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
TOGAF 9 Certification (or equivalent Enterprise Architecture certification).
Proven experience designing and building large-scale web applications.
Expert-level knowledge of Java (Java 21+), Spring Framework / Spring Boot, JPA/Hibernate, REST/GraphQL APIs, NodeJS, and the Angular framework.
Strong database expertise: relational (PostgreSQL, MySQL) and NoSQL (MongoDB, Redis).
Proficiency in microservices architecture, containerization (Docker), and cloud platforms (AWS, Azure, or GCP).
Experience with messaging systems (Kafka, RabbitMQ, ActiveMQ).
Solid understanding of application security and OWASP Top 10 principles.
Experience in performance optimization, caching strategies, and load testing.
Familiarity with build tools (Maven, Gradle) and version control systems (Git).
Excellent problem-solving and communication skills.

Benefits
Excellent medical benefits, including a family plan.
Paid Time Off (PTO) plus India public holidays.
Competitive salary.
Combined onsite and remote work.

LRN is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

Posted 4 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Role: We are seeking a highly skilled Senior Backend Java Developer with 4 or more years of hands-on experience in designing, developing, and maintaining robust backend systems. The ideal candidate will have strong expertise in Java, Spring, SQL, and unit testing frameworks such as JUnit or Mockito. Familiarity with AI-assisted development tools (like Claude Code, Cursor, GitHub Copilot) is essential. Experience with DevOps practices and cloud platforms like AWS is a plus.

Responsibilities:
Design, build, and maintain scalable and secure backend services using Java and the Spring framework.
Develop and execute unit and integration tests using JUnit, Mockito, or equivalent frameworks.
Collaborate with frontend engineers, DevOps, and cross-functional teams to deliver complete and reliable features.
Write and optimize SQL queries and manage relational databases to ensure high-performance data operations.
Leverage AI-assisted coding tools (e.g., Claude Code, Cursor, GitHub Copilot) to boost productivity and maintain code quality.
Participate in code reviews, ensure adherence to best practices, and mentor junior developers as needed.
Troubleshoot, diagnose, and resolve complex issues in production and staging environments.
Contribute to technical documentation, architecture discussions, and Agile development processes (e.g., sprint planning, retrospectives).

Qualifications:
Strong proficiency in Java and object-oriented programming concepts.
Hands-on experience with Spring / Spring Boot for building RESTful APIs and backend services.
Proficiency in testing frameworks such as JUnit, Mockito, or equivalent.
Solid experience in writing and optimizing SQL queries for relational databases (e.g., PostgreSQL, MySQL).
Experience using AI-assisted coding tools (e.g., Claude Code, Cursor, GitHub Copilot) in a production environment.
Understanding of DevOps tools and practices (CI/CD, Docker, etc.).
Experience with AWS services (e.g., EC2, RDS, S3, Lambda).
Exposure to containerization and cloud-native development.

*Hybrid working for Mumbai or Pune.

Posted 4 days ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description
We are hiring a Java Developer with Cloud experience at our Noida location.

Key Responsibilities
Design and implement secure full stack applications using Java and Spring Boot.
Build and maintain REST APIs with a focus on performance and security.
Contribute to container-based deployments using Docker and Kubernetes.
Participate in CI/CD processes and Agile sprint ceremonies.
Collaborate with teams to improve system architecture and security posture.
Troubleshoot and resolve development, deployment, and runtime issues.

TECH STACK YOU'LL WORK WITH
Needed (Strong Hands-on Experience Expected)
Core Development: Java, Spring Boot, RESTful APIs
Build Tools: Maven
Security Concepts: Secure coding practices, understanding of authentication and authorization
Version Control: Git
DevOps: Docker (for containerizing microservices)

IDEAL CANDIDATE PROFILE
3–5 years of experience in Java development
Architecture & Design: Object-Oriented Design, common design patterns
Exposure to cloud platforms (Azure/AWS) and containerization (Docker)
Azure: VMs, Functions, AKS, RBAC
AWS: EC2, Lambda, IAM, S3
Container Orchestration: Kubernetes (AKS/EKS)
Basic understanding of the fintech domain or secure systems
Willingness to grow into areas like payments, EMV, and tokenization
Eager to collaborate, learn, and contribute in a fast-paced environment

DOMAIN KNOWLEDGE (Awareness is Sufficient)
Card Tokenization
Payment Authorization
EMV Specification & APDU Format
ISO 20022 payment messaging
Various digital payment methods (NFC, wallets, cards)

Posted 4 days ago

Apply

2.0 years

0 Lacs

Greater Rajkot Area

On-site

Job Summary: We are seeking a talented and experienced Senior Odoo Developer to join our dynamic team. The ideal candidate will have a solid understanding of Odoo development, customization, and integration, with at least 2 years of relevant experience. You will play a key role in designing, implementing, and maintaining Odoo solutions tailored to our business needs, ensuring a seamless workflow and user experience.

Experience: 2+ years
Location: Onsite, Rajkot, Gujarat

Key Responsibilities
● Develop and customize Odoo modules to meet specific business requirements.
● Perform end-to-end development tasks, including coding, debugging, testing, and deployment.
● Integrate Odoo with third-party applications and APIs.
● Troubleshoot and resolve system issues, ensuring optimal performance and reliability.
● Collaborate with cross-functional teams to gather requirements and provide technical solutions.
● Maintain and upgrade Odoo versions, ensuring compatibility and functionality.
● Write and maintain clear, concise technical documentation.
● Mentor junior developers and contribute to knowledge sharing within the team.

Qualifications
● Bachelor's degree in Computer Science, Software Engineering, or a related field.
● Minimum of 1+ year of professional experience in Odoo development.
● Proficiency in Python programming and a strong understanding of the Odoo framework.
● Experience with PostgreSQL and database management.
● Familiarity with front-end technologies such as HTML, CSS, JavaScript, and jQuery.
● Strong understanding of object-oriented programming (OOP) principles.
● Excellent problem-solving and debugging skills.

Preferred Qualifications
● Experience with Odoo.sh or other cloud-hosted Odoo environments.
● Knowledge of Docker and containerization technologies.
● Understanding of ERP workflows across different domains (e.g., accounting, inventory, CRM).
● Experience with version control systems like Git.
● Strong communication and collaboration skills.
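For readers unfamiliar with Odoo customization, the sketch below shows the shape of a custom Odoo model in Python. The model and field names are hypothetical; in a real module this file would live under models/ and be declared in the module's __manifest__.py.

```python
# Minimal sketch of a custom Odoo model (illustrative names only).
from odoo import fields, models

class WorkshopJobCard(models.Model):
    _name = "workshop.job.card"          # hypothetical technical name
    _description = "Workshop Job Card"

    name = fields.Char(required=True)
    partner_id = fields.Many2one("res.partner", string="Customer")
    estimated_hours = fields.Float(string="Estimated Hours")
    state = fields.Selection(
        [("draft", "Draft"), ("done", "Done")], default="draft"
    )

    def action_done(self):
        # Simple state transition; real logic would add validations
        # and possibly chatter messages or linked accounting entries.
        self.write({"state": "done"})
```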

Posted 4 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Experience
Minimum 5 years of coding experience in ReactJS (TypeScript), HTML, CSS pre-processors, or CSS-in-JS, creating high-performance enterprise and responsive web applications.
Minimum 5 years of coding experience in NodeJS, JavaScript & TypeScript and NoSQL databases.
Developing and implementing highly responsive user interface components using React concepts (self-contained, reusable, and testable modules and components).
Architecting and automating the build process for production, using task runners or scripts.
Knowledge of data structures for TypeScript.
Monitoring and improving front-end performance.
Banking or Retail domain knowledge is good to have.
Hands-on experience in performance tuning, debugging, and monitoring.

Technical Skills
Excellent knowledge of developing scalable and highly available RESTful APIs using NodeJS technologies.
Well versed with CI/CD principles, and actively involved in solving and troubleshooting issues in a distributed services ecosystem.
Understanding of containerization; experienced in Docker and Kubernetes. Exposed to API gateway integrations like 3scale.
Understanding of single sign-on or token-based authentication (REST, JWT, OAuth).
Possess expert knowledge of task/message queues including but not limited to: AWS, Microsoft Azure, Pushpin, and Kafka.
Practical experience with GraphQL is good to have.
Writing tested, idiomatic, and documented JavaScript, HTML, and CSS.
Experience in developing responsive web-based UIs.
Experience with Styled Components, Tailwind CSS, Material UI, and other CSS-in-JS techniques.
Thorough understanding of the responsibilities of the platform, database, API, caching layer, proxies, and other web services used in the system.
Writing non-blocking code, and resorting to advanced techniques such as multi-threading when needed.
Strong proficiency in JavaScript, including DOM manipulation and the JavaScript object model.
Documenting the code inline using JSDoc or other conventions.
Thorough understanding of React.js and its core principles.
Familiarity with modern front-end build pipelines and tools.
Experience with popular React.js workflows (such as Flux, Redux, Context API, or data structures).
A knack for benchmarking and optimization.
Proficient with the latest versions of ECMAScript (JavaScript or TypeScript).
Knowledge of React and common tools used in the wider React ecosystem, such as npm, yarn, etc.
Familiarity with common programming tools such as RESTful APIs, TypeScript, version control software, remote deployment tools, and CI/CD tools.
An understanding of common programming paradigms and fundamental React principles, such as React components, hooks, and the React lifecycle.
Unit testing using Jest, Enzyme, Jasmine, or an equivalent framework.
Understanding of linter libraries (TSLint, Prettier, etc.).

Technical Skills (Testing & APIs)
Excellent knowledge of developing and testing scalable and highly available RESTful APIs / microservices using JavaScript technologies.
Able to create end-to-end automation test suites using Playwright / Selenium, preferably using a BDD approach.
Practical experience with GraphQL.
Well versed with CI/CD principles, and actively involved in solving and troubleshooting issues in a distributed services ecosystem.
Understanding of containerization; experienced in Docker and Kubernetes. Exposed to API gateway integrations like 3scale.
Understanding of single sign-on or token-based authentication (REST, JWT, OAuth).
Possess expert knowledge of task/message queues including but not limited to: AWS, Microsoft Azure, Pushpin, and Kafka.

Functional Skills
Experience in following best coding, testing, security, unit testing, and documentation standards and practices.
Experience in Agile methodology.
Effectively research and benchmark technology against other best-in-class technologies.

Soft Skills
Able to influence multiple teams on technical considerations, increasing their productivity and effectiveness by sharing deep knowledge and experience.
Self-motivated and a self-starter; able to own and drive things without supervision and work collaboratively with teams across the organization.
Excellent soft skills and interpersonal skills to interact with and present ideas to senior and executive management.

Posted 4 days ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description
We are hiring for a Senior QA Engineer role at our Noida location.

Key Responsibilities
Lead the design and implementation of scalable and maintainable test automation frameworks using Java, Cucumber, and Serenity.
Review and optimize API test suites (functional, security, load) using REST Assured, Postman, and Gatling.
Architect CI/CD-ready testing workflows within Jenkins pipelines, integrated with Docker, Kubernetes, and cloud deployments (Azure/AWS).
Define QA strategies and environment setups using Helm, Kustomize, and Kubernetes manifests.
Validate digital payment journeys (tokenization, authorization, fallback) against EMV, APDU, and ISO 20022 specs.
Drive technical discussions with cross-functional Dev/DevOps/R&D teams.
Mentor junior QAs, conduct code/test reviews, and enforce test coverage and quality standards.

IDEAL CANDIDATE PROFILE
4–8 years of hands-on experience in test automation and DevOps.
Deep understanding of design patterns, OOP principles, and scalable system design.
Experience working in cloud-native environments (Azure & AWS).
Knowledge of APDU formats, EMV specs, ISO 20022, and tokenization flows is a strong plus.
Exposure to secure payment authorization protocols and transaction validations.

TECH STACK YOU'LL WORK WITH
Languages & Frameworks: Java, JUnit/TestNG, Serenity, Cucumber, REST Assured
Cloud Platforms: Azure (VMs, Functions, AKS), AWS (Lambda, EC2, S3, IAM)
DevOps/Containerization: Jenkins, Docker, Kubernetes (AKS/EKS), Helm, Kustomize, Maven
API & Performance Testing: Postman, Gatling
Proficient in test environment provisioning and pipeline scripting

Domain Knowledge Required
Deep understanding of card tokenization, EMV standards, and APDU formats
Experience with payment authorization flows across methods (credit, debit, wallets, NFC)
Familiarity with ISO 20022 and other financial messaging standards

Posted 4 days ago

Apply

7.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Designation: Associate Architect
Experience: 7 to 12 years
Job Location: Ahmedabad, Gujarat (work from the office)

How you should be: We are looking for a talented and experienced Architect to join our team. The ideal candidate will have 7+ years of experience.

What you will do
• Collaborate with senior architects and project stakeholders to understand project requirements and objectives.
• Assist in the design and development of architectural solutions that meet the needs of our clients.
• Create architectural diagrams, models, and documentation to communicate design concepts and decisions.
• Research emerging technologies and trends in architecture and recommend best practices.
• Participate in architectural reviews and provide constructive feedback to team members.
• Assist in the evaluation and selection of technology platforms, frameworks, and tools.
• Work closely with development teams to ensure architectural alignment and adherence to design principles.
• Support the implementation and deployment of architectural solutions, including troubleshooting and issue resolution.
• Provide technical guidance and mentorship to junior team members.
• Stay up to date with industry standards and regulations related to architecture and security.

What we are looking for
• Degree in Computer Science, Engineering, or a related field.
• Proven experience with .NET Core and Angular is a must.
• Strong understanding of software architecture principles and design patterns.
• Proficiency in architectural modelling tools such as Enterprise Architect, ArchiMate, or similar.
• Excellent communication and collaboration skills.
• Ability to work effectively in a team environment and independently.
• Strong analytical and problem-solving skills.
• Familiarity with Agile methodologies.
• Knowledge of client-side frameworks, Node, SQL, C#, and Web API.
• Experience with cloud computing platforms such as AWS, Azure, or Google Cloud Platform is a plus.
• Knowledge of enterprise integration patterns and technologies.
• Experience with microservices architecture and containerization technologies (e.g., Docker, Kubernetes).
• Familiarity with architectural governance frameworks and processes.
• Experience working on large-scale and complex projects.
• Certification such as TOGAF or a cloud architect-level certificate is preferable.

Posted 4 days ago

Apply

2.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Role Summary
We are seeking a skilled Data Engineer to join our team. The candidate will be responsible for designing, building, and maintaining robust data infrastructure that powers PocketFM's recommendation systems, analytics, and business intelligence capabilities. This role offers an exciting opportunity to work with large-scale data systems that directly impact millions of users' audio entertainment experience.

Key Responsibilities
Data Infrastructure & Pipeline Development
Design, develop, and maintain scalable ETL/ELT pipelines to process large volumes of user interaction data, content metadata, and streaming analytics.
Build and optimize data warehouses and data lakes to support both real-time and batch processing requirements.
Implement data quality monitoring and validation frameworks to ensure data accuracy and reliability.
Develop automated data ingestion systems from various sources including mobile apps, web platforms, and third-party integrations.

Analytics & Reporting Infrastructure
Create and maintain data models that support business intelligence, user analytics, and content performance metrics.
Build self-service analytics platforms enabling stakeholders to access insights independently.
Implement real-time dashboards and alerting systems for key business metrics.
Support A/B testing frameworks and experimental data analysis requirements.

Data Architecture & Optimization
Collaborate with software engineers to optimize database performance and query efficiency.
Design data storage solutions that balance cost, performance, and accessibility requirements.
Implement data governance practices including data cataloging, lineage tracking, and access controls.
Ensure GDPR and data privacy compliance across all data systems.

Collaboration & Support
Work closely with data scientists, product managers, and analysts to understand data requirements.
Participate in code reviews and maintain high standards of code quality and documentation.
Mentor junior team members and contribute to knowledge sharing initiatives.

Required Qualifications
Technical Skills
Programming Languages: Proficiency in Python, SQL, and at least one of Java, Scala, or Go.
Big Data Technologies: Hands-on experience with Apache Spark, Kafka, Airflow, and distributed computing frameworks.
Cloud Platforms: Strong experience with AWS, GCP, or Azure data services (S3, BigQuery, Redshift, etc.).
Database Systems: Expertise in both SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra, Redis) databases.
Data Warehousing: Experience with modern data warehouse solutions like Snowflake, BigQuery, or Databricks.
Containerization: Proficiency with Docker and Kubernetes for deploying data applications.

Experience Requirements
2-4 years of experience in data engineering or related roles.
Proven track record of building and maintaining production data pipelines at scale.
Experience with streaming data processing and real-time analytics systems.
Strong understanding of data modeling, schema design, and data architecture principles.
Experience with version control systems (Git) and CI/CD pipelines.

Preferred Qualifications (Good to Have)
Machine Learning & Model Operations
Model Deployment: Experience deploying machine learning models to production environments using frameworks like MLflow, Kubeflow, or SageMaker.
MLOps Practices: Familiarity with ML pipeline automation, model versioning, and continuous integration for machine learning.

Advanced Technical Skills
Experience with vector databases, graph databases, and knowledge graphs.
Understanding of data mesh architecture and domain-driven data design.
Experience with data privacy and security implementations.
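To illustrate the kind of scheduled pipeline work the role describes, here is a minimal Airflow DAG sketch. It assumes Airflow 2.x, and the DAG ID, schedule, and task logic are invented placeholders rather than anything from the posting.

```python
# Minimal Airflow 2.x DAG sketch for a daily ETL step (pip install apache-airflow).
# DAG/task names and the processing logic are illustrative only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load(ds: str, **_) -> None:
    # 'ds' is the logical date Airflow passes into the task context.
    # A real task would pull that day's raw listening events and load
    # them into the warehouse, with idempotent writes for backfills.
    print(f"processing events for {ds}")

with DAG(
    dag_id="daily_listening_events",   # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```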

Posted 4 days ago

Apply

3.0 years

0 Lacs

Delhi, India

On-site

Designation: ML / MLOps Engineer
Location: Noida (Sector 132)

Key Responsibilities:
• Model Development & Algorithm Optimization: Design, implement, and optimize ML models and algorithms using libraries and frameworks such as TensorFlow, PyTorch, and scikit-learn to solve complex business problems.
• Training & Evaluation: Train and evaluate models using historical data, ensuring accuracy, scalability, and efficiency while fine-tuning hyperparameters.
• Data Preprocessing & Cleaning: Clean, preprocess, and transform raw data into a suitable format for model training and evaluation, applying industry best practices to ensure data quality.
• Feature Engineering: Conduct feature engineering to extract meaningful features from data that enhance model performance and improve predictive capabilities.
• Model Deployment & Pipelines: Build end-to-end pipelines and workflows for deploying machine learning models into production environments, leveraging Azure Machine Learning and containerization technologies like Docker and Kubernetes.
• Production Deployment: Develop and deploy machine learning models to production environments, ensuring scalability and reliability using tools such as Azure Kubernetes Service (AKS).
• End-to-End ML Lifecycle Automation: Automate the end-to-end machine learning lifecycle, including data ingestion, model training, deployment, and monitoring, ensuring seamless operations and faster model iteration.
• Performance Optimization: Monitor and improve inference speed and latency to meet real-time processing requirements, ensuring efficient and scalable solutions.
• NLP, CV, GenAI Programming: Work on machine learning projects involving Natural Language Processing (NLP), Computer Vision (CV), and Generative AI (GenAI), applying state-of-the-art techniques and frameworks to improve model performance.
• Collaboration & CI/CD Integration: Collaborate with data scientists and engineers to integrate ML models into production workflows, building and maintaining continuous integration/continuous deployment (CI/CD) pipelines using tools like Azure DevOps, Git, and Jenkins.
• Monitoring & Optimization: Continuously monitor the performance of deployed models, adjusting parameters and optimizing algorithms to improve accuracy and efficiency.
• Security & Compliance: Ensure all machine learning models and processes adhere to industry security standards and compliance protocols, such as GDPR and HIPAA.
• Documentation & Reporting: Document machine learning processes, models, and results to ensure reproducibility and effective communication with stakeholders.

Required Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
• 3+ years of experience in machine learning operations (MLOps), cloud engineering, or similar roles.
• Proficiency in Python, with hands-on experience using libraries such as TensorFlow, PyTorch, scikit-learn, Pandas, and NumPy.
• Strong experience with Azure Machine Learning services, including Azure ML Studio, Azure Databricks, and Azure Kubernetes Service (AKS).
• Knowledge and experience in building end-to-end ML pipelines, deploying models, and automating the machine learning lifecycle.
• Expertise in Docker, Kubernetes, and container orchestration for deploying machine learning models at scale.
• Experience in data engineering practices and familiarity with cloud storage solutions like Azure Blob Storage and Azure Data Lake.
• Strong understanding of NLP, CV, or GenAI programming, along with the ability to apply these techniques to real-world business problems.
• Experience with Git, Azure DevOps, or similar tools to manage version control and CI/CD pipelines.
• Solid experience in machine learning algorithms, model training, evaluation, and hyperparameter tuning.
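As a rough illustration of the train/tune/export step that precedes deployment in a pipeline like the one described, here is a minimal scikit-learn sketch using synthetic data. The estimator, parameter grid, and artifact name are illustrative choices, not prescribed by the posting.

```python
# Minimal sketch of training, hyperparameter tuning, and model export
# (pip install scikit-learn joblib). The data here is synthetic.
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    cv=3,
    scoring="roc_auc",
)
search.fit(X_train, y_train)
print("best params:", search.best_params_)
print("held-out ROC AUC:", search.score(X_test, y_test))

# Persist the tuned model so a serving container (e.g. on AKS) can load it.
joblib.dump(search.best_estimator_, "model.joblib")
```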

Posted 4 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Java Full Stack Development
Good-to-have skills: AWS Architecture, API Management
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: We are seeking a highly skilled and experienced Senior Software Engineer with a hybrid skill set combining software development and testing expertise. The candidate must have a strong background in designing, developing, maintaining, and testing scalable SaaS solutions in a cloud environment. As a Senior Software Engineer, you will play a critical role in driving the technical direction of our projects and ensuring the highest quality of our software products.

Roles and responsibilities:
Software Development: Design, develop, and maintain high-quality software solutions using Java (Spring Boot) technologies.
Software Testing: Create test scenarios and design, develop, and execute corresponding automated tests to ensure software functions per specifications.
Cloud Infrastructure: Utilize AWS services to architect and manage scalable, secure, and cost-effective cloud infrastructure for SaaS applications.
Technical Leadership: Provide technical leadership and mentorship to junior engineers, ensuring best practices in software development and cloud architecture.
Collaboration: Work closely with cross-functional teams, including product management, QA, and DevOps, to deliver robust and reliable software solutions.
Code Reviews: Conduct thorough code reviews to ensure code quality, performance, and maintainability.
Continuous Improvement: Stay current with industry trends and emerging technologies and actively contribute to continuous improvement initiatives.

Education qualifications: Master's degree (preferred) or bachelor's degree in computer science, engineering, or a related field (or equivalent experience).
Experience: Minimum of 4+ years of experience in software development and testing, with at least 3 years focusing on SaaS applications.

Technical experience & professional attributes:
Proficient in Java (Spring Boot), with strong experience in AWS services (EC2, S3, Lambda, RDS, CloudFormation) or equivalent Azure/GCP experience; knowledge of RESTful API and OpenAPI design and development, and database technologies (SQL and NoSQL).
Familiarity with containerization technologies (Docker, Kubernetes) and CI/CD pipelines for automated, reliable software delivery.
Familiarity with front-end technologies (e.g., Angular, React) is a plus.
Familiarity with event-driven architecture (e.g., Pulsar, SNS/SQS) and deployment Infrastructure as Code (e.g., Helm and Argo CD) is a plus.
Ability to develop and maintain test automation suites and frameworks.
Ability to define test strategies and scenarios, leveraging industry-standard QA testing methodologies, and capable of developing automated tests, leveraging REST API test automation (request library/component parameterization) and UI test automation using Selenium or a similar tool.
Demonstrated problem-solving skills with a track record of tackling complex technical challenges and delivering innovative solutions.
Excellent communication and interpersonal skills, with the ability to thrive in a fast-paced, collaborative environment.
Strong organizational, presentation, and facilitation skills, with experience mentoring and guiding less experienced developers.
Commitment to code quality and best practices, including conducting code reviews and performing Root Cause Analysis (RCA) for critical issues.
Proactive in identifying and implementing opportunities for process improvement, enhancing efficiency and productivity.
Results-oriented and customer-focused, with a deep understanding of business objectives and customer needs, and a commitment to delivering high-quality products.
Agile mentality, staying abreast of emerging technologies and continuously learning and adapting to changing requirements and priorities.

Our Core Values: Here are the Winning Way behaviors that all employees embrace every day:
Own the Outcome: Commit to milestones and demonstrate unwavering support for team decisions. If you are unsure, ask.
Work with Purpose: Foster a "We Can" mindset where results outweigh effort and everyone understands how their roles contribute to team outcomes.
Act with Urgency: Adopt an agile mentality with a focus on quick iterations and resilience.
Communicate with Clarity: Be clear, concise, and actionable. Embrace constructive feedback.
Drive to Decision: Make decisions swiftly with defined deadlines and accountability.

Additional Information: Experience with CI/CD pipelines and DevOps practices. Knowledge of microservices architecture. 15 years of full-time education.

Posted 4 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Egen is a fast-growing and entrepreneurial company with a data-first mindset. We bring together the best engineering talent working with the most advanced technology platforms, including Google Cloud and Salesforce, to help clients drive action and impact through data and insights. We are committed to being a place where the best people choose to work so they can apply their engineering and technology expertise to envision what is next for how data and platforms can change the world for the better. We are dedicated to learning, thrive on solving tough problems, and continually innovate to achieve fast, effective results.

Job Summary
We are seeking a talented and passionate Python Developer to join our dynamic team. In this role, you will be instrumental in designing, developing, and deploying scalable and efficient applications on the Google Cloud Platform. You will have the opportunity to work on exciting projects and contribute to the growth and innovation of our products/services. You will also provide mentorship to other engineers and engage with clients to understand their needs and deliver effective solutions.

Responsibilities:
Design, develop, and maintain robust and scalable applications using Python.
Build and consume RESTful APIs using FastAPI.
Deploy and manage applications on the Google Cloud Platform (GCP).
Collaborate effectively with cross-functional teams, including product managers, designers, and other engineers.
Write clean, well-documented, and testable code.
Participate in code reviews to ensure code quality and adherence to best practices.
Troubleshoot and debug issues in development and production environments.
Create clear and effective documents.
Stay up to date with the latest industry trends and technologies.
Assist the junior team members.

Required Skills And Experience
5+ years of relevant work experience in software development using Python.
Solid understanding and practical experience with the FastAPI framework.
Hands-on experience with the Google Cloud Platform (GCP) and its core services.
Experience with CI/CD pipelines.
Ability to write unit test cases and execute them.
Able to discuss and propose architectural changes.
Knowledge of security best practices.
Strong problem-solving and analytical skills.
Excellent communication and collaboration abilities.
Bachelor's degree in Computer Science or a related field (or equivalent practical experience).

Optional Skills (a Plus)
Experience with any front-end framework such as Angular, React, Vue.js, etc.
Familiarity with DevOps principles and practices.
Experience with infrastructure-as-code tools like Terraform.
Knowledge of containerization technologies such as Docker and Kubernetes.
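Since the role pairs FastAPI development with writing and executing unit tests, here is a minimal sketch of a FastAPI endpoint and a pytest-style test using FastAPI's TestClient. The endpoint name and payload are illustrative placeholders.

```python
# Minimal FastAPI endpoint plus a pytest-style unit test using TestClient
# (pip install fastapi httpx pytest). Names are illustrative only.
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/health")
def health() -> dict:
    # Trivial liveness endpoint; real services would also check dependencies.
    return {"status": "ok"}

client = TestClient(app)

def test_health() -> None:
    response = client.get("/health")
    assert response.status_code == 200
    assert response.json() == {"status": "ok"}
```

Running `pytest` against this module exercises the endpoint in-process, with no server or GCP deployment needed.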

Posted 4 days ago

Apply

3.0 years

0 Lacs

India

On-site

About Gruve
Gruve is an innovative software services startup dedicated to transforming enterprises into AI powerhouses. We specialize in cybersecurity, customer experience, cloud infrastructure, and advanced technologies such as Large Language Models (LLMs). Our mission is to assist our customers in their business strategies, utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.

About The Role
As a Full-Stack Developer, you'll architect, develop, and deploy full-stack systems using Java (Spring Boot) and Python (Flask/Django or scripting). In this role, you'll drive end-to-end automation via CI/CD pipelines, containerization, and cloud infrastructure, working closely with DevOps and project teams for scalable, secure application delivery.

Key Responsibilities
Build and maintain backend microservices in Java (Spring Boot); implement automation/data transformation scripts in Python for analytics or workflow processing.
Develop dynamic, responsive user interfaces using modern frameworks (typically React.js or Angular), integrating with APIs via REST/GraphQL.
Design, implement, and manage CI/CD pipelines using tools such as Jenkins, Azure DevOps, Terraform, or GitHub Actions for fully automated deployment and testing.
Containerize applications with Docker and orchestrate via Kubernetes or OpenShift; collaborate with DevOps teams to manage staging/production environments.
Integrate with relational (PostgreSQL, MySQL) and NoSQL (MongoDB) databases; handle schema design, indexing, and query optimization.
Implement secure authentication/authorization (OAuth2, JWT, RBAC), privacy best practices, and compliance workflows (HIPAA, DevSecOps) where applicable.
Monitor system performance and stability using tools like Prometheus, Grafana, ELK Stack, or Datadog; troubleshoot production issues proactively.
Participate in Agile ceremonies (sprint planning, backlog grooming, code reviews) and collaborate with project stakeholders across technical and business teams.

Basic Qualifications
BE / B.Tech / MCA plus 3+ years of professional software development experience, with strong proficiency in Java (Spring Boot) and at least 2+ years writing backend services or automation in Python.
Front-end experience using React.js or Angular, including modern JS features, hooks/state, and UI libraries like Material-UI or Chakra UI.
Proven experience building RESTful APIs and deploying microservices in cloud environments (AWS, Azure, GCP).
Practical knowledge of DevOps tooling: Docker, Kubernetes, CI/CD platforms (Jenkins, Azure DevOps, GitHub Actions), and scripting in Python or Bash.
Hands-on with SQL and NoSQL databases, version control with Git, and working in Agile environments using tools like Jira or Confluence.

Preferred Skills
Prior experience in U.S. healthcare, insurance, or compliance-heavy domains; familiarity with standards like HL7 or FHIR is a plus.
Experience with Terraform, Helm, or other Infrastructure-as-Code (IaC) tools, and with security frameworks like OWASP, DevSecOps, or Kubernetes hardening.
Knowledge of event-driven architectures (e.g., Kafka, AWS SQS), GraphQL, WebSockets, or real-time data exchange frameworks.
Background in test automation frameworks (JUnit, PyTest, Cypress, Selenium) and code quality tools (SonarQube, static analysis).

This position is being hired for a customer of Gruve.

Why Gruve
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you're passionate about technology and eager to make an impact, we'd love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
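One of the responsibilities above is writing Python automation/data-transformation scripts for analytics. As a loose illustration of that kind of script (the file name and columns are invented, not from the posting), a pandas transformation might look like this:

```python
# Minimal pandas data-transformation sketch (pip install pandas).
# Input file and column names are illustrative placeholders.
import pandas as pd

def summarize_orders(path: str = "orders.csv") -> pd.DataFrame:
    # Load raw order events, drop incomplete rows, and roll up
    # spend per customer per month for downstream analytics.
    df = pd.read_csv(path, parse_dates=["created_at"])
    df = df.dropna(subset=["customer_id", "amount"])
    return (
        df.assign(month=df["created_at"].dt.to_period("M"))
          .groupby(["month", "customer_id"], as_index=False)["amount"]
          .sum()
          .rename(columns={"amount": "monthly_spend"})
    )

if __name__ == "__main__":
    print(summarize_orders().head())
```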

Posted 4 days ago

Apply

8.0 years

0 Lacs

India

On-site

The ideal candidate will be responsible for developing high-quality applications. They will also be responsible for designing and implementing testable and scalable code. We are looking for a Full Stack React.js Developer. Apply with an updated CV at sony.pathak@aptita.com

Lead Engineer - React
Notice period: Immediate to 30 days
Experience range: 8 years
Must-have experience: React.js, Node.js

Responsibilities
Education and experience:
○ Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
○ Minimum of 8 years of professional experience in full-stack development.
● Technical Requirements:
○ Proficiency in JavaScript, including ES6 and beyond, asynchronous programming, closures, and prototypal inheritance.
○ Expertise in modern front-end frameworks/libraries (React, Vue.js).
○ Strong understanding of HTML5, CSS3, and pre-processing platforms like SASS or LESS.
○ Experience with responsive and adaptive design principles.
○ Knowledge of front-end build tools like Webpack, Babel, and npm/yarn.
○ Proficiency in Node.js and frameworks like Express.js, Koa, or NestJS.
○ Experience with RESTful API design and development.
○ Experience with serverless (Lambda, Cloud Functions).
○ Experience with GraphQL.
○ Experience with SQL databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Redis).
○ Experience with caching & search frameworks (Redis, Elasticsearch).
○ Proficiency in database schema design and optimization.
○ Experience with containerization tools (Docker, Kubernetes).
○ Experience with CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI).
○ Knowledge of cloud platforms (AWS, Azure, Google Cloud).
○ Proficiency in testing frameworks and libraries (Jest, Vitest, Cypress, Storybook).
○ Strong debugging skills using tools like Chrome DevTools and the Node.js debugger.
○ Expertise in using Git and platforms like GitHub, GitLab, or Bitbucket.
○ Understanding of web security best practices (OWASP).
○ Experience with authentication and authorization mechanisms (OAuth, JWT).
○ System security, scalability, and system performance experience.

Qualifications
Bachelor's degree or equivalent experience in Computer Science or a related field
Development experience with programming languages
SQL database or relational database skills

Posted 4 days ago

Apply

0 years

0 Lacs

India

On-site

Who We Are
Papigen is a fast-growing global technology services company, delivering innovative digital solutions through deep industry experience and cutting-edge expertise. We specialize in technology transformation, enterprise modernization, and dynamic areas like Cloud, Big Data, Java, React, DevOps, and more. Our client-centric approach combines consulting, engineering, and data science to help businesses evolve and scale efficiently.

Position Summary
We are seeking an experienced Azure DevOps Administrator / Developer to support application development, delivery, and platform operations for enterprise data initiatives. You will play a critical role in enabling modern cloud-based solutions that span data integration, governance, privacy, analytics, and platform services. This role involves collaborating with cross-functional teams to ensure the seamless design, deployment, automation, and monitoring of infrastructure and applications across Microsoft Azure and enterprise platforms.

Responsibilities
Design, implement, and maintain CI/CD pipelines in Azure DevOps for application and data engineering projects.
Automate provisioning, scaling, and monitoring of Azure infrastructure and services.
Support deployment and configuration of enterprise data platforms (e.g., Databricks, Power BI, Collibra, Informatica MDM, OneTrust, PoolParty).
Implement infrastructure-as-code using ARM templates, Terraform, or Bicep.
Manage release processes and ensure smooth deployments across environments.
Monitor system health, performance, and costs, proactively addressing issues.
Administer role-based access control (RBAC), policies, and compliance frameworks in Azure.
Integrate and support data pipelines, ETL workflows, and application components.
Implement security best practices, including encryption, auditing, and monitoring.
Collaborate with developers, data engineers, and governance teams to optimize platform reliability and performance.
Maintain documentation for environments, deployment processes, and operational procedures.

Required Skills & Experience
Cloud Expertise: Strong experience with Microsoft Azure cloud services, including Azure Data Services, Azure DevOps, and resource governance.
CI/CD & Automation: Hands-on expertise with Azure DevOps Pipelines, Git, build/release automation, and Infrastructure-as-Code (IaC).
Scripting & Development: Proficiency in PowerShell, Bash, Python, or .NET for automation and tooling.
Containerization & Orchestration: Experience with Docker and Kubernetes (AKS preferred).
Monitoring & Logging: Experience with Azure Monitor, Log Analytics, and Application Insights.
Security & Compliance: Strong understanding of RBAC, data classification, and cloud security principles.
Collaboration Tools: Familiarity with Jira, Agile delivery, and cross-team collaboration.

Preferred Skills (Nice To Have)
Experience with Databricks, Informatica MDM, Collibra, or Microsoft Purview.
Exposure to enterprise integration patterns and ESB (Enterprise Service Bus) solutions.
Familiarity with BI platforms like Power BI and Tableau.
Understanding of data governance and marketplace applications.

Skills: cloud security, PowerShell, Application Insights, Microsoft Azure, IaaS, PowerShell scripting, RBAC, Python, Terraform, Bash, Log Analytics, data classification, Kubernetes, DevOps, CI/CD, AKS, .NET, CI/CD pipelines, Docker, Azure Data Services, Azure Monitor, Azure DevOps, infrastructure-as-code
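The role lists Python as one of the accepted scripting languages for Azure automation and tooling. As a small sketch of that kind of script (the governance check and tag name are illustrative, and authentication assumes an environment where DefaultAzureCredential can resolve credentials), the Azure SDK for Python can be used like this:

```python
# Minimal Azure automation sketch in Python
# (pip install azure-identity azure-mgmt-resource).
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]  # placeholder env var
client = ResourceManagementClient(credential, subscription_id)

# Inventory resource groups and flag any missing an 'owner' tag,
# a simple example of a resource-governance check.
for rg in client.resource_groups.list():
    tags = rg.tags or {}
    if "owner" not in tags:
        print(f"{rg.name} ({rg.location}) is missing an 'owner' tag")
```

The same pattern extends to other azure-mgmt-* clients for provisioning or cost and policy checks, and such scripts can run as steps inside Azure DevOps pipelines.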

Posted 4 days ago

Apply

8.0 years

0 Lacs

India

Remote

Job Title: Senior Python Developer – AI/ML Platforms
Experience: 8+ Years
Location: Remote
Employment Type: Full-time

About the Role:
Our business is strategically building advanced AI/ML platforms for enterprise-scale solutions, and we are seeking senior software engineers to accelerate feature delivery. As part of an existing agile scrum team, you will collaborate closely with Optum Labs engineering leaders to execute and deliver high-impact features aligned with our AI/ML vision.

Key Responsibilities:
- Develop, optimize, and maintain robust Python-based AI/ML platform components.
- Design and implement scalable solutions leveraging Azure AI services, GenAI models, LLMs, and agentic frameworks (a short Azure OpenAI call sketch follows this posting).
- Collaborate with data scientists, MLOps engineers, and architects to integrate AI-driven functionalities into enterprise systems.
- Participate actively in agile ceremonies, contributing to backlog refinement, sprint planning, and code reviews.
- Ensure high standards of code quality, performance, and security in production environments.
- Troubleshoot, debug, and resolve complex production and integration issues.
- Continuously explore and integrate emerging technologies in generative AI and LLM-based solutions to improve product capabilities.

Required Skills & Qualifications:
- 8+ years of professional software development experience with a strong focus on Python.
- Proven expertise in Azure AI services, including deployment, scaling, and integration.
- Hands-on experience with generative AI (GenAI) technologies, LLMs (OpenAI, Azure OpenAI, Hugging Face, etc.), and agentic frameworks.
- Strong understanding of AI/ML solution design, APIs, and integration patterns.
- Solid grasp of cloud-native application architecture, microservices, and distributed systems.
- Experience in agile development methodologies with CI/CD pipelines.
- Strong problem-solving skills with the ability to work in a collaborative, fast-paced environment.

Preferred Skills:
- Knowledge of vector databases, embeddings, and semantic search techniques.
- Familiarity with MLOps tools and processes.
- Experience with containerization (Docker, Kubernetes).
- Background in healthcare or enterprise-scale AI solutions is a plus.
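To illustrate the kind of LLM integration work the posting describes, here is a minimal sketch of calling an Azure OpenAI chat deployment with the openai Python SDK. The endpoint, API version, deployment name, and prompts are placeholders and assumptions, not details from the posting or the employer's stack.

```python
# Minimal sketch: calling an Azure OpenAI chat deployment from a platform component.
# Endpoint, API version, and deployment name are placeholders; they must match
# whatever is actually configured in your Azure OpenAI resource.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed; use the version your deployment supports
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # the name of *your* deployment, not a fixed model ID
    messages=[
        {"role": "system", "content": "You summarize engineering tickets."},
        {"role": "user", "content": "Summarize: the nightly ETL job failed on step 3."},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```

A production component would add retries, timeouts, token accounting, and logging around this call rather than invoking it inline.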

Posted 4 days ago

Apply

5.0 years

0 Lacs

India

On-site

About Gruve
Gruve is an innovative software services startup dedicated to transforming enterprises into AI powerhouses. We specialize in cybersecurity, customer experience, cloud infrastructure, and advanced technologies such as Large Language Models (LLMs). Our mission is to assist our customers in their business strategies, utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.

About The Role
As a Full-Stack Developer, you'll architect, develop, and deploy full-stack systems using Java (Spring Boot) and Python (Flask/Django or scripting). In this role, you'll drive end-to-end automation via CI/CD pipelines, containerization, and cloud infrastructure, working closely with DevOps and project teams for scalable, secure application delivery.

Key Responsibilities
- Build and maintain backend microservices in Java (Spring Boot); implement automation/data transformation scripts in Python for analytics or workflow processing.
- Develop dynamic, responsive user interfaces using modern frameworks (typically React.js or Angular), integrating with APIs via REST/GraphQL.
- Design, implement, and manage CI/CD pipelines using tools such as Jenkins, Azure DevOps, Terraform, or GitHub Actions for fully automated deployment and testing.
- Containerize applications with Docker and orchestrate via Kubernetes or OpenShift; collaborate with DevOps teams to manage staging/production environments.
- Integrate with relational (PostgreSQL, MySQL) and NoSQL (MongoDB) databases; handle schema design, indexing, and query optimization.
- Implement secure authentication/authorization (OAuth2, JWT, RBAC), privacy best practices, and compliance workflows (HIPAA, DevSecOps) where applicable (a minimal JWT verification sketch follows this posting).
- Monitor system performance and stability using tools like Prometheus, Grafana, the ELK Stack, or Datadog; troubleshoot production issues proactively.
- Participate in Agile ceremonies (sprint planning, backlog grooming, code reviews) and collaborate with project stakeholders across technical and business teams.

Basic Qualifications
- BE/BTech/MCA with 5+ years of professional software development experience, including strong proficiency in Java (Spring Boot) and at least 2+ years writing backend services or automation in Python.
- Front-end experience using React.js or Angular, including modern JS features, hooks/state, and UI libraries like Material-UI or Chakra UI.
- Proven experience building RESTful APIs and deploying microservices in cloud environments (AWS, Azure, GCP).
- Practical knowledge of DevOps tooling: Docker, Kubernetes, CI/CD platforms (Jenkins, Azure DevOps, GitHub Actions), and scripting in Python or Bash.

Preferred Skills
- Prior experience in U.S. healthcare, insurance, or compliance-heavy domains; familiarity with standards like HL7 or FHIR is a plus.
- Experience with Terraform, Helm, or other Infrastructure-as-Code (IaC) tools, and with security frameworks like OWASP, DevSecOps, or Kubernetes hardening.
- Knowledge of event-driven architectures (e.g., Kafka, AWS SQS), GraphQL, WebSockets, or real-time data exchange frameworks.
- Background in test automation frameworks (JUnit, PyTest, Cypress, Selenium) and code quality tools (SonarQube, static analysis).

This position is being hired for a customer of Gruve.

Why Gruve
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you're passionate about technology and eager to make an impact, we'd love to hear from you. Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
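As a concrete example of the JWT-based authorization this role mentions, here is a minimal sketch of protecting a Flask endpoint with PyJWT. The secret source, claims, and route are illustrative assumptions, not Gruve's actual stack; a real deployment would typically validate asymmetric tokens from an OAuth2 identity provider rather than a shared HS256 secret.

```python
# Minimal sketch: protecting a Flask endpoint with a JWT bearer token using PyJWT.
# The secret source, claims, and route below are illustrative assumptions.
import os
from functools import wraps

import jwt
from flask import Flask, g, jsonify, request

app = Flask(__name__)
JWT_SECRET = os.environ.get("JWT_SECRET", "change-me")  # assumed HS256 shared secret


def require_jwt(view):
    """Reject requests that lack a valid 'Authorization: Bearer <token>' header."""
    @wraps(view)
    def wrapper(*args, **kwargs):
        auth = request.headers.get("Authorization", "")
        if not auth.startswith("Bearer "):
            return jsonify(error="missing bearer token"), 401
        token = auth.split(" ", 1)[1]
        try:
            g.jwt_claims = jwt.decode(token, JWT_SECRET, algorithms=["HS256"])
        except jwt.InvalidTokenError:
            return jsonify(error="invalid or expired token"), 401
        return view(*args, **kwargs)
    return wrapper


@app.route("/api/reports")  # hypothetical endpoint
@require_jwt
def reports():
    # The verified claims are available for authorization decisions in the view.
    return jsonify(user=g.jwt_claims.get("sub"), reports=[])


if __name__ == "__main__":
    app.run(debug=True)
```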

Posted 4 days ago

Apply

7.0 years

0 Lacs

Kochi, Kerala, India

On-site

Job Title: Cloud Solution Architect
Experience: 7+ Years
Location: Kochi
Notice Period: Immediate joiners

Job Description:
We are seeking an experienced Cloud Solution Architect to lead enterprise-scale cloud transformation and modernization initiatives. The role involves designing secure, scalable, high-performance solutions on Microsoft Azure, aligning cloud strategies with business objectives, and guiding teams through cloud adoption.

Key Responsibilities:
- Design and implement end-to-end Azure architectures for enterprise workloads.
- Lead cloud migration, modernization, and greenfield deployment projects using Azure services.
- Define and enforce governance, security, and compliance across Azure environments (a small governance-audit sketch follows this posting).
- Implement Infrastructure as Code (IaC) using Terraform, Bicep, or ARM templates.
- Collaborate with DevOps teams for CI/CD pipeline integration and automation.
- Optimize performance, availability, and cost of cloud solutions.
- Mentor developers, cloud engineers, and solution teams on Azure best practices.
- Work with stakeholders and leadership to translate business needs into technical solutions.

Must-Have Skills:
- 8+ years of IT experience, with at least 5 years in Azure solution architecture.
- Strong hands-on experience with Azure compute, storage, networking, identity, and security services.
- Expertise in hybrid and cloud-native architecture design.
- Deep understanding of Azure networking, security controls, and governance (RBAC, Policies, Key Vault).
- Proficiency in DevOps tools including Azure DevOps, GitHub, and YAML pipelines.
- Experience in scripting (PowerShell, Azure CLI) and Infrastructure as Code (Terraform, Bicep, ARM templates).
- Strong communication and documentation skills.

Preferred Skills:
- Familiarity with architecture frameworks like TOGAF, Microsoft CAF, or the Well-Architected Framework.
- Experience in multi-region deployments, high-availability setups, and disaster recovery.
- Knowledge of Azure API Management, Service Bus, Event Grid, and Logic Apps.
- Exposure to containerization platforms like Azure Kubernetes Service (AKS) and Docker.
- Working knowledge of Azure Synapse, Data Factory, and analytics services.
- Experience in BFSI, Healthcare, or Government domains.
- Experience conducting technical workshops, architecture governance, and solution reviews.

Certification Requirement: Microsoft Certified: Azure Solutions Architect Expert (must be active).
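As a small illustration of the governance work this role covers, here is a hedged Python sketch that audits a subscription for resources missing a mandatory tag, using azure-identity and azure-mgmt-resource. The tag name ("cost-center") and the environment variable are assumptions; in practice this kind of rule is usually enforced declaratively with Azure Policy, with a script like this used only for ad-hoc reporting.

```python
# Minimal sketch: auditing Azure resources for a mandatory governance tag.
# Assumes AZURE_SUBSCRIPTION_ID is set and the credential can authenticate.
# The required tag name is an illustrative policy, not taken from the posting.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

REQUIRED_TAG = "cost-center"

client = ResourceManagementClient(
    DefaultAzureCredential(), os.environ["AZURE_SUBSCRIPTION_ID"]
)

# Walk every resource in the subscription and collect the ones missing the tag.
non_compliant = [
    res.id
    for res in client.resources.list()
    if not (res.tags or {}).get(REQUIRED_TAG)
]

print(f"{len(non_compliant)} resources missing the '{REQUIRED_TAG}' tag")
for resource_id in non_compliant[:20]:  # print a sample, not the full list
    print(" -", resource_id)
```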

Posted 4 days ago

Apply

10.0 years

1 - 15 Lacs

Dehradun, Uttarakhand, India

On-site

Job Title: UPI Switch Architect
Location: Dehradun
Department: Payments / Technology
Experience Required: 10+ years in payments technology, with 5+ years in real-time payment switch or UPI systems

Role Overview
We are seeking an experienced UPI Switch Architect to design, implement, and optimize the architecture for our real-time UPI payment switch. The role covers both UPI Acquirer Switch and UPI Issuer Switch design, ensuring seamless interoperability between NPCI and internal systems. The ideal candidate will have deep technical expertise in UPI, ISO 8583/ISO 20022 protocols, high-availability systems, and real-time payment routing, as well as a strong grasp of issuer- and acquirer-side transaction processing, UPI push/pull flows, UPI collect, and QR-based payment mechanisms.

Key Responsibilities

Architecture & Design
- Define and own the UPI Acquirer and Issuer Switch architecture, including transaction flow, message orchestration, dispute handling, and settlement processing.
- Design scalable, fault-tolerant, and low-latency payment processing systems capable of handling millions of transactions per second.
- Create architecture blueprints for the core switch, API gateways, message queues, and integration points with NPCI and internal banking systems.
- Define disaster recovery (DR) and high availability (HA) architecture with zero-downtime principles.

Domain Expertise & Compliance
- Ensure the architecture complies with NPCI guidelines, RBI regulations, and PCI-DSS standards.
- Implement security best practices including encryption, key management, and secure API design.
- Incorporate features for UPI enhancements like UPI Lite, UPI AutoPay, recurring mandates, and future NPCI initiatives.

Technology Leadership
- Guide engineering teams in technology stack selection, coding standards, and performance optimization for both acquirer and issuer environments.
- Review system performance metrics and continuously enhance throughput, resiliency, and monitoring capabilities.
- Stay updated on UPI 2.0/3.0, new NPCI circulars, and evolving compliance requirements for acquirers and issuers.

Integration & Collaboration
- Lead integration efforts between core banking, acquirer payment gateways, issuer authorization systems, fraud detection, and settlement systems.
- Work with infrastructure and DevOps teams to ensure smooth CI/CD pipelines and containerized deployments; the event-driven side of this stack is sketched after this posting.
- Coordinate with NPCI technical teams for certification, testing, and go-live readiness.

Technical Expertise

Required Skills & Experience
- Strong experience with real-time payment systems, UPI acquirer/issuer switches, and ISO 8583/ISO 20022 messaging.
- Deep understanding of Java for backend services, including Java microservices architecture, multithreading, and Kafka-based event streaming.
- Proficiency in API frameworks and event-driven architecture.
- Knowledge of RDBMS (Oracle/PostgreSQL) and NoSQL (Cassandra/Redis/Mongo) for high-speed transactions.
- Hands-on experience with Linux systems, containerization (Docker, Kubernetes), and cloud-native deployments.

Domain Knowledge
- End-to-end understanding of UPI Acquirer Switch flows (merchant acquiring, transaction routing, merchant settlement).
- End-to-end understanding of UPI Issuer Switch flows (authorization, debit/credit posting, reversal handling, dispute management).
- Strong knowledge of UPI pull transaction flows (merchant/payer-initiated collect requests).
- Strong knowledge of UPI push transaction flows (payer-initiated payments to a merchant or P2P).
- Deep knowledge of UPI collect request flows and payer bank authorization handling.
- Experience with UPI Intent QR (deep linking from merchant apps) and UPI Dynamic QR (real-time QR code generation).
- Familiarity with NPCI's certification processes and acquirer/issuer compliance requirements.

Soft Skills
- Strong analytical and problem-solving skills for high-pressure, time-sensitive scenarios.
- Excellent documentation, communication, and stakeholder management abilities.
- Ability to mentor development teams and influence architecture decisions.

Skills: UPI Dynamic QR, NoSQL (Cassandra/Redis/Mongo), API frameworks, UPI collect request flows, UPI push transaction flows, NPCI certification processes, Java, UPI pull transaction flows, multithreading, design, Kafka, UPI systems, UPI Intent QR, Java microservices architecture, payments technology, ISO 8583, Linux, cloud-native deployments, RDBMS (Oracle/PostgreSQL), Docker, real-time payment systems, ISO 20022 messaging, event-driven architecture, UPI acquirer/issuer switch, architecture, payment switch, Kubernetes
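To make the event-driven piece of this architecture concrete, here is a minimal sketch of publishing a payment event to Kafka for downstream consumers such as fraud checks, settlement, or reconciliation. The role itself calls for Java; Python with kafka-python is used here purely to keep the illustration short. The topic name, event fields, and broker address are assumptions, not NPCI or UPI specifications.

```python
# Minimal sketch: publishing a payment event to Kafka for downstream consumers.
# Topic name, event fields, and broker address are illustrative assumptions only.
import json
import uuid
from datetime import datetime, timezone

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",                      # assumed broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    acks="all",    # wait for all in-sync replicas before confirming the write
    retries=3,
)

event = {
    "txn_id": str(uuid.uuid4()),
    "flow": "PUSH",                      # payer-initiated payment
    "amount": "150.00",
    "currency": "INR",
    "status": "AUTHORIZED",
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# Key by transaction ID so retries and reversals for the same transaction land on
# the same partition and stay ordered for consumers.
producer.send("upi.transactions", key=event["txn_id"].encode(), value=event)
producer.flush()
```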

Posted 4 days ago

Apply

3.0 - 4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Cloud Engineer (Certified)
Experience Required: 3 to 4 years
Location: Pune or Mumbai
Employment Type: Full-time

Job Summary:
We are seeking a skilled and proactive Cloud Engineer with 3-4 years of hands-on experience in designing, deploying, and managing cloud infrastructure, primarily on Amazon Web Services (AWS) and Google Cloud Platform (GCP), who holds a Solutions Architect – Professional or DevOps Engineer – Professional certificate. Applicants need to have an active professional-level AWS or GCP certificate.

Key Responsibilities:
- Design and implement scalable, secure, and cost-optimized cloud solutions on AWS and GCP.
- Manage infrastructure as code using tools like Terraform, AWS CloudFormation, or CDK.
- Apply Python or Go scripting and programming knowledge.
- Operate Kubernetes clusters proficiently.
- Monitor system performance, troubleshoot issues, and ensure high availability.
- Implement CI/CD pipelines and automate deployment processes.
- Work closely with development, security, and operations teams to ensure smooth cloud operations.
- Ensure best practices for cloud security, backup, and disaster recovery are followed.

Required Skills & Qualifications:
- 3-4 years of proven experience as a Cloud Engineer working in AWS/GCP environments.
- Mandatory: professional certification (e.g., Solutions Architect – Professional or DevOps Engineer – Professional).
- Strong understanding of cloud security principles and networking (VPC, subnets, VPN, etc.).
- Experience with Linux/Unix systems and scripting (Bash, Python, etc.).
- Familiarity with CI/CD tools like Jenkins, GitLab CI, or AWS CodePipeline.
- Experience with monitoring tools (CloudWatch, Prometheus, Grafana, etc.).
- Experience with containerization (Docker, Kubernetes, ECS, or EKS).
- Exposure to other cloud platforms like GCP or Azure.
- Knowledge of cost optimization and governance in multi-account AWS setups (a small tagging-audit sketch follows this posting).

Important Note:
- Being an extremely good communicator is a must.
- A professional-level technical certification is mandatory.
- Ready to travel for client meetings to drive solution messaging and demos.
- Ready for global engagements.

#CloudEngineer #AWSJobs #GCPJobs #DevOpsIndia #Terraform #Kubernetes #HiringNow #StartupJobsIndia #PuneJobs #MumbaiJobs #Infimatrix #CloudJobs #DevOpsEngineer #AWSCommunity #GCPCloud #CloudCareers #InfrastructureAsCode #CI_CD #CloudSecurity #SREJobs #CloudComputing #HiringCloudEngineers #ITJobsIndia #TechJobsIndia #JobAlertIndia #NowHiring #TechHiring #MumbaiTechJobs #PuneTechJobs #EngineeringJobs #StartupCareers #TechStartupsIndia #AWSCertified #GCPProfessional #CertifiedCloudEngineer #DevOpsCareers #JoinOurTeam #GrowWithUs #TechTalent #InfimatrixCareers
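As a small example of the cost-governance work this posting mentions, here is a hedged Python sketch using boto3 to flag running EC2 instances that lack a cost-allocation tag. The region and the required tag key are illustrative assumptions; credentials are expected to come from the usual AWS environment or config chain.

```python
# Minimal sketch: flagging running EC2 instances that lack a cost-allocation tag.
# Region and the required tag key are illustrative assumptions only.
import boto3

REQUIRED_TAG = "CostCenter"

ec2 = boto3.client("ec2", region_name="ap-south-1")
paginator = ec2.get_paginator("describe_instances")

untagged = []
for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if REQUIRED_TAG not in tags:
                untagged.append(instance["InstanceId"])

print(f"{len(untagged)} running instances missing the '{REQUIRED_TAG}' tag")
for instance_id in untagged:
    print(" -", instance_id)
```

In a multi-account setup the same check would typically run per account via an assumed role, or be replaced by AWS Config rules and tag policies.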

Posted 4 days ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Experience: 7+ years
Notice Period: Immediate to 15 days
Location: Chennai
Shift: 2pm to 11pm
Skills: React/React Native, .NET/C#, Azure or AWS services, CI/CD processes, system architecture and system design

FULL STACK TECH LEAD

Responsibilities:

Technical Leadership:
- Lead a team of full stack developers in designing, developing, and implementing high-quality software solutions.
- Provide technical guidance and mentorship to team members.
- Collaborate with cross-functional teams to ensure alignment with business objectives.

Full Stack Development:
- Hands-on development using a variety of technologies, including jQuery, Angular, React/React Native (mandatory), Vue.js, Node.js, and .NET/C# (mandatory).
- Design and implement scalable and maintainable code.
- Ensure the integration of front-end and back-end components for seamless functionality.

Project Management:
- Oversee project timelines, ensuring milestones are met and projects are delivered on time.
- Work closely with project managers to define project scope and requirements.

Code Review and Quality Assurance:
- Conduct code reviews to maintain code quality and ensure adherence to best practices.
- Implement and enforce coding standards and development processes.

Communication:
- Prioritize effective communication within the team and with stakeholders.
- Act as a liaison between technical and non-technical teams to ensure understanding and alignment of project goals.

Qualifications:
- Bachelor's degree in computer science or a related field.
- 10-15 years of relevant professional and hands-on software development experience.
- Proven experience in full stack development with expertise in jQuery, Angular, React/React Native (mandatory), Vue.js, Node.js, and .NET/C# (mandatory).
- Strong understanding of software architecture and design principles (e.g., 2-tier/3-tier and other system architecture and design patterns).
- Proven experience with database technologies such as SQL Server, MySQL, or MongoDB, as well as NoSQL technologies.
- Hands-on experience with CI/CD pipelines and deployment tools such as GitHub, Maven, Jenkins, etc.
- Excellent communication and interpersonal skills.
- Experience in leading and mentoring development teams.

Additional Skills (Preferred):
- Familiarity with cloud platforms such as AWS, Azure (preferred), or Google Cloud.
- Knowledge of containerization and orchestration tools (Docker, Kubernetes).

Posted 4 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies