
6 Containerized Deployments Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 - 10.0 years

0 Lacs

Chandigarh

On-site

You have 5+ years of backend or full-stack development experience, including at least 3 years specializing in generative and agentic AI. Your expertise lies in building APIs with REST, GraphQL, and gRPC, with an emphasis on performance, versioning, and security. Proficiency in Python is required, with additional knowledge of TypeScript/Node.js, Go, or Java. You should have a deep understanding of LLM integration and orchestration (OpenAI, Claude, Gemini, Mistral, LLaMA, etc.) and hands-on experience with frameworks such as LangChain, LlamaIndex, CrewAI, and AutoGen. Familiarity with vector search, semantic memory, and retrieval-augmentation tools such as FAISS or Qdrant is preferred. A solid grasp of cloud infrastructure (AWS, GCP, or Azure) and containerized deployments (Docker, Kubernetes) is also essential for this role.
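
To illustrate the vector-search requirement above, here is a minimal sketch using FAISS; the documents, embedding dimension, and random stand-in vectors are hypothetical (a real pipeline would use an embedding model such as those the listing mentions).

```python
# Minimal vector-search sketch, assuming faiss-cpu and numpy are installed.
# Documents and vectors are illustrative stand-ins, not from the listing.
import numpy as np
import faiss

docs = [
    "Reset your password from the account settings page.",
    "Invoices are emailed on the first business day of each month.",
    "Containers are deployed to Kubernetes through the CI pipeline.",
]

dim = 64
rng = np.random.default_rng(0)
# In practice these vectors would come from an embedding model;
# random float32 vectors keep the sketch self-contained and runnable.
doc_vectors = rng.random((len(docs), dim)).astype("float32")

index = faiss.IndexFlatL2(dim)      # exact L2 nearest-neighbour index
index.add(doc_vectors)              # index the document embeddings

query_vector = rng.random((1, dim)).astype("float32")
distances, ids = index.search(query_vector, 2)   # retrieve the top-2 documents
print([docs[i] for i in ids[0]])
```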

Posted 1 day ago

Apply

8.0 - 15.0 years

0 Lacs

Maharashtra

On-site

You will be responsible for developing and maintaining scalable web applications using .NET Core (C#) and Angular. This includes implementing RxJS for managing reactive data streams and Route Guards for securing routes in Angular, creating custom pipes, implementing pagination without external UI libraries, and ensuring optimal performance. You will design and develop middleware, including custom middleware, to handle requests, responses, and authentication flows, and implement API authentication and authorization mechanisms such as JWT and OAuth2. You will manage application configuration using appsettings.json and other environment-based settings, optimize backend database queries for efficient data access and retrieval, and handle async/await, parallel API calls, and smooth async programming across services. Writing clean, maintainable, and testable code, including unit tests using standard frameworks, is expected. You will also participate in CI/CD pipeline configuration and deployment to higher environments, integrate and configure Azure Application Insights for logging and monitoring, collaborate with team members on microservices architecture and containerized deployments, and troubleshoot, debug, and solve complex technical problems.

On the backend, you should be proficient in .NET Core / ASP.NET Core, with a strong understanding of middleware, dependency injection, and app configuration; experience with async programming, threading, and managing parallel jobs; and knowledge of HTTP status codes, API standards, and performance-optimization techniques. On the frontend, expertise in Angular (v8+) with strong knowledge of RxJS, Route Guards, Services, Components, and Pipes is required, along with the ability to pass data between components, call REST APIs, and implement pagination without UI libraries. For testing and DevOps, experience writing unit tests (e.g., NUnit, Jasmine, Karma), an understanding of CI/CD processes and tools (e.g., Azure DevOps, Jenkins), and deployment to higher environments are essential. Familiarity with cloud platforms, preferably Azure, and hands-on experience with Azure Application Insights and cloud service integrations are expected.

Nice-to-have skills include experience with microservices architecture, database-optimization techniques (SQL/NoSQL), Docker/Kubernetes, and Agile/Scrum methodologies. The interview will focus on real-life use cases of async handling and middleware, CI/CD and deployment experience, the toughest challenges you have faced and how you handled them, your ability to optimize code and queries for performance, and your understanding of application architecture.
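
The listing above leans heavily on async/await and managing parallel API calls; the role itself targets C#/.NET, but the same fan-out pattern can be sketched in a few lines of Python asyncio (the service names and latencies below are made up for illustration).

```python
# Illustrative fan-out of parallel "API calls" with async/await; the JD's stack
# is C#/.NET, this Python sketch only shows the concurrency pattern it describes.
import asyncio

async def call_api(name: str, delay: float) -> str:
    # Stand-in for an outbound HTTP request (e.g., via httpx or aiohttp).
    await asyncio.sleep(delay)
    return f"{name} responded"

async def main() -> None:
    # Await all three calls together instead of sequentially,
    # so total latency is roughly the slowest call, not the sum.
    results = await asyncio.gather(
        call_api("orders-service", 0.3),
        call_api("customers-service", 0.2),
        call_api("billing-service", 0.1),
    )
    print(results)

asyncio.run(main())
```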

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

We are seeking a highly skilled AI/ML Engineer to join our team. You will be responsible for designing, implementing, and optimizing machine learning solutions spanning traditional models, deep learning architectures, and generative AI systems, collaborating with data engineers and cross-functional teams to create scalable, ethical, and high-performance AI/ML solutions that contribute to business growth.

Your key responsibilities include developing, implementing, and optimizing AI/ML models using both traditional machine learning and deep learning techniques; designing and deploying generative AI models for innovative business applications; and working closely with data engineers to establish and maintain high-quality data pipelines and preprocessing workflows. Integrating responsible AI practices to ensure ethical, explainable, and unbiased model behavior is a crucial part of the role. You will also develop and maintain MLOps workflows to streamline training, deployment, monitoring, and continuous integration of ML models; optimize large language models (LLMs) for efficient inference, memory usage, and performance; and collaborate with product managers, data scientists, and engineering teams to integrate AI/ML into core business processes. Rigorous testing, validation, and benchmarking of models to ensure accuracy, reliability, and robustness are essential.

To succeed in this position, you need a strong foundation in machine learning, deep learning, and statistical modeling; hands-on experience with TensorFlow, PyTorch, scikit-learn, or similar ML frameworks; and proficiency in Python and ML engineering tools such as MLflow, Kubeflow, or SageMaker. Experience deploying generative AI solutions, an understanding of responsible AI concepts, solid experience with MLOps pipelines, and proficiency in optimizing transformer models or LLMs for production workloads are key qualifications. Familiarity with cloud services (AWS, GCP, Azure) and containerized deployments (Docker, Kubernetes), excellent problem-solving and communication skills, and the ability to work collaboratively with cross-functional teams are also essential.

Preferred qualifications include experience with data-versioning tools such as DVC or LakeFS, exposure to vector databases and retrieval-augmented generation (RAG) pipelines, knowledge of prompt engineering, fine-tuning, and quantization techniques for LLMs, familiarity with Agile workflows and sprint-based delivery, and contributions to open-source AI/ML projects or published papers in conferences or journals.

Join our team at Lucent Innovation, an India-based IT solutions provider, and enjoy a work environment that promotes work-life balance. With a focus on employee well-being, we offer 5-day workweeks, flexible working hours, and a range of indoor and outdoor activities, employee trips, and celebratory events throughout the year. At Lucent Innovation, we value our employees' growth and success, providing in-house training as well as quarterly and yearly rewards and appreciation.

Perks:
- 5-day workweeks
- Flexible working hours
- No hidden policies
- Friendly working environment
- In-house training
- Quarterly and yearly rewards & appreciation
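
As a small illustration of the MLOps tooling this listing names, here is a hedged sketch of logging a training run with MLflow and scikit-learn; the dataset, hyperparameters, and run name are arbitrary examples, not part of the role.

```python
# Minimal experiment-tracking sketch, assuming mlflow and scikit-learn are installed.
# Dataset, parameters, and metric choices are illustrative only.
import mlflow
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    # Record hyperparameters and the held-out accuracy so runs stay comparable.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
```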

Posted 3 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

You will be part of the software and product innovation team at PwC, focusing on creating advanced software solutions and driving product innovation to meet client needs. The role combines technical expertise with creative thinking to deliver cutting-edge software products and solutions. As a software engineer at PwC, your primary focus will be developing innovative software solutions that contribute to digital transformation and enhance business performance, designing, coding, and testing state-of-the-art applications that revolutionize industries and provide exceptional user experiences.

In the PwC Acceleration Centers (ACs), you will actively support a variety of services, including Advisory, Assurance, Tax, and Business Services. Working within these innovation hubs, you will engage in challenging projects, deliver distinctive services that enhance client engagements through quality and innovation, and participate in dynamic, digitally enabled training programs designed to grow both your technical and professional skills.

Within the Cloud Engineering Data & AI team, you will lead and supervise complex Java programs, ensuring alignment with business strategy and delivering measurable outcomes. As a Manager, you will motivate, develop, and inspire team members to deliver quality results, manage client accounts, and drive project success.

**Responsibilities:**
- Oversee the development of Java EJB applications
- Mentor junior staff to enhance their technical capabilities
- Collaborate with cross-functional teams to define project goals
- Maintain adherence to coding standards and quality assurance
- Analyze application performance and implement improvements
- Foster a collaborative and innovative team environment
- Uphold ethical standards in software development practices
- Drive project timelines and ensure successful delivery

**Requirements:**
- Bachelor's degree
- 8 years of experience in Java, EJB, and microservices
- Oral and written proficiency in English

**Desirable Skills:**
- Proven experience developing multi-threaded applications
- Hands-on experience with DevOps practices and tools
- Familiarity with microservices architecture and technologies
- Experience with containerized deployments
- Knowledge of message parsing for banking messages
- Exceptional communication and interpersonal skills

*Additional educational requirements may apply.*

Posted 3 weeks ago

Apply

7.0 - 9.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Req ID: 327820

NTT DATA strives to hire exceptional, innovative, and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a DataRobot Consultant & Deployment Lead to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Job Duties: U.S. Bank is seeking an experienced and visionary AI Architect to lead the design, deployment, and optimization of DataRobot's AI platform within our Azure cloud environment. This role requires deep technical expertise in AI/ML systems, containerized deployments, and cloud infrastructure, as well as the ability to guide cross-functional teams and influence strategic decisions. The ideal candidate will serve as a Subject Matter Expert (SME) and technical leader, ensuring scalable, secure, and high-performing AI solutions across the enterprise.

Key Responsibilities:
- Architect and oversee the end-to-end deployment of DataRobot on Azure, including containerized environments using AKS.
- Lead the onboarding and migration of legacy SAS-based and DataRobot Prime v6 Python models into DataRobot v8.0.21, and plan the upgrade path to v11.
- Collaborate with data scientists, engineers, and business stakeholders to design and implement robust model monitoring and governance frameworks.
- Serve as the primary escalation point for complex technical issues related to DataRobot configuration, performance, and integration.
- Define and enforce best practices for model lifecycle management, including onboarding, execution with historical data, and metric validation.
- Align DataRobot outputs with existing Power BI dashboards and develop equivalent visualizations within the platform.
- Guide the configuration of Azure services required for DataRobot, including networking, storage, identity, and security.
- Troubleshoot AKS container setup and configuration issues, and propose scalable, secure solutions.
- Lead the Shield process and manage Jira stories across PreDev, PostDev, Prod, and RTx environments.
- Mentor and train internal teams on DataRobot capabilities, deployment strategies, and AI/ML best practices.
- Maintain comprehensive technical documentation and contribute to the development of enterprise AI standards.

Required Skills & Qualifications:
- 7+ years of experience in AI/ML architecture, cloud engineering, or related technical leadership roles.
- Deep expertise in DataRobot platform deployment, configuration, and monitoring.
- Strong proficiency with Azure services, especially AKS, Azure Storage, Azure Networking, and Azure DevOps.
- Proven experience with containerized deployments and Kubernetes troubleshooting.
- Solid understanding of model performance metrics, monitoring strategies, and visualization tools such as Power BI.
- Proficiency in Python and familiarity with SAS model structures and LST reports.
- Strong leadership, communication, and cross-functional collaboration skills.
- Experience working in regulated environments and navigating enterprise governance processes.

Preferred Qualifications:
- Experience in financial services or banking environments.
- Familiarity with U.S. Bank's Shield process or similar enterprise deployment frameworks.
- Background in MLOps, CI/CD pipelines for ML, and AI governance.

About NTT DATA: NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize, and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future.

NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status.
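
As a hedged illustration of the "execution with historical data and metric validation" responsibility above, the sketch below scores a candidate model on historical labels and applies a simple quality gate. The model stub, threshold, and data are hypothetical stand-ins; a real workflow would pull predictions from DataRobot and data from governed storage.

```python
# Illustrative metric-validation gate for an onboarded model; everything here
# (model stub, threshold, synthetic data) is a stand-in, not DataRobot-specific.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

def validate_model(predict_proba, X_hist: np.ndarray, y_hist: np.ndarray,
                   min_auc: float = 0.70) -> dict:
    """Score a candidate model on historical data and flag metric regressions."""
    scores = predict_proba(X_hist)                    # probability of the positive class
    metrics = {
        "auc": roc_auc_score(y_hist, scores),
        "accuracy": accuracy_score(y_hist, (scores >= 0.5).astype(int)),
    }
    metrics["passed"] = metrics["auc"] >= min_auc     # simple illustrative gate
    return metrics

# Toy usage: a random "model" scored against synthetic historical labels.
rng = np.random.default_rng(7)
X_hist, y_hist = rng.random((200, 4)), rng.integers(0, 2, 200)
print(validate_model(lambda X: rng.random(len(X)), X_hist, y_hist))
```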

Posted 1 month ago

Apply

10.0 - 17.0 years

30 - 45 Lacs

Pune, Chennai, Bengaluru

Hybrid

Seeking a Technical Architect to lead Mirakl Marketplace integrations. Define scalable architecture, drive middleware strategy with Java SDKs, and align integration flows across ERP, PIM, and OMS systems.

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies