
15819 Containerization Jobs - Page 47

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Job Title: Big Data NiFi Developer

About Us
Capco, a Wipro company, is a global technology and management consulting firm. We were named Consultancy of the Year at the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial services, and energy sectors, and we are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO?
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry - projects that will transform the financial services industry.

MAKE AN IMPACT
Innovative thinking, delivery excellence, and thought leadership help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.

Job Title: Big Data NiFi Developer
Location: Pune (Hybrid)
Experience: 3 to 5 years
Work Mode: Hybrid (2-3 days from the client office, rest remote)

Job Description
We are seeking a highly skilled and motivated Big Data NiFi Developer to join our growing data engineering team in Pune. The ideal candidate will have hands-on experience with Apache NiFi, a strong understanding of big data technologies, and a background in data warehousing or ETL processes. If you are passionate about working with high-volume data pipelines and building scalable data integration solutions, we'd love to hear from you.

Key Responsibilities
Design, develop, and maintain data flow pipelines using Apache NiFi.
Integrate and process large volumes of data from diverse sources using Spark and NiFi workflows (see the illustrative sketch below).
Collaborate with data engineers and analysts to transform business requirements into data solutions.
Write reusable, testable, and efficient code in Python, Java, or Scala.
Develop and optimize ETL/ELT pipelines for performance and scalability.
Ensure data quality, consistency, and integrity across systems.
Participate in code reviews, unit testing, and documentation.
Monitor and troubleshoot production data workflows and resolve issues proactively.

Skills & Qualifications
3 to 5 years of hands-on experience in big data development.
Strong experience with Apache NiFi for data ingestion and transformation.
Proficiency in at least one programming language: Python, Scala, or Java.
Experience with Apache Spark for distributed data processing.
Solid understanding of data warehousing concepts and ETL tools/processes.
Experience working with large datasets and with batch and streaming data processing.
Knowledge of the Hadoop ecosystem and cloud platforms (AWS, Azure, or GCP) is a plus.
Excellent problem-solving and communication skills.
Ability to work independently in a hybrid work environment.

Nice To Have
Experience with NiFi Registry and version control integration.
Familiarity with containerization tools (Docker/Kubernetes).
Exposure to real-time data streaming tools like Kafka.
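For context on the Spark side of this role, here is a minimal, illustrative PySpark batch-ETL sketch; the bucket paths, schema, and column names are hypothetical placeholders, and a real NiFi-fed pipeline would add schema validation and error handling.

```python
# Illustrative only: a minimal PySpark batch ETL step of the kind this role describes.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("nifi-downstream-etl").getOrCreate()

# Read raw records that an upstream NiFi flow might have landed in object storage.
raw = spark.read.option("header", True).csv("s3a://example-bucket/landing/transactions/")

cleaned = (
    raw.dropDuplicates(["transaction_id"])                    # basic de-duplication
       .withColumn("amount", F.col("amount").cast("double"))  # type standardization
       .withColumn("event_date", F.to_date("event_ts"))       # derive a partition column
       .filter(F.col("amount").isNotNull())                   # drop unusable rows
)

# Write partitioned Parquet for downstream analytics / warehouse loads.
cleaned.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-bucket/curated/transactions/"
)

spark.stop()
```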

Posted 1 week ago

Apply

8.0 years

0 Lacs

Delhi, India

On-site

About NomiSo India: Nomiso is a product and services engineering company. We are a team of software engineers, architects, managers, and cloud experts with expertise in technology and delivery management. Our mission is to empower and enhance the lives of our customers through simple solutions for their complex business problems.

At NomiSo, we encourage an entrepreneurial spirit - to learn, grow, and improve. A great workplace thrives on ideas and opportunities; that is part of our DNA. We are in pursuit of colleagues who share similar passions, are nimble, and thrive when challenged. We offer a positive, stimulating, and fun environment - with opportunities to grow, a fast-paced approach to innovation, and a place where your views are valued and encouraged. We invite you to push your boundaries and join us in fulfilling your career aspirations!

What You Can Expect from Us: We work hard to provide our team with the best opportunities to grow their careers. You can expect to be a pioneer of ideas, a student of innovation, and a leader of thought. Innovation and thought leadership are at the centre of everything we do, at all levels of the company. Let's make your career great!

Position Overview: We are looking for a hands-on L3 Support engineer for the challenging and fun-filled work of building a workflow automation system to simplify current manual work.

Roles and Responsibilities:
Install, configure, and maintain OpenShift environments.
Ensure high availability and reliability of the OpenShift platform.
Monitor and optimize cluster performance (a small monitoring sketch follows this listing).
Hands-on work with ODF (Ceph storage).
Implement security best practices within the cluster.
Troubleshoot and resolve issues within the OpenShift environment.
Collaborate with development teams for seamless application deployment.
Document procedures and provide training to team members.
Conduct regular backups and disaster recovery operations.

Must-Have Skills:
8-12+ years of experience administering Kubernetes or OpenShift environments.
Strong understanding of containerization technologies.
Experience with CI/CD tools and practices.
Knowledge of networking and security within containerized environments.
Excellent troubleshooting and problem-solving skills.
Strong written and verbal communication skills.

Core Tools & Technology Stack: Red Hat OpenShift, ODF (Ceph storage), Loki stack, containers, Docker, CI/CD pipelines, Ansible, Linux administration, Git, Prometheus/Grafana, SDN, OVN, Kubernetes, networking, shell scripting.

Qualification: BE/B.Tech or equivalent degree in Computer Science or a related field.

Location: Delhi-NCR
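As a rough illustration of the day-to-day cluster monitoring this role describes, here is a minimal sketch using the official Kubernetes Python client (OpenShift exposes the same core API); it assumes a reachable kubeconfig, and what to do about unhealthy pods is left to the operator.

```python
# Illustrative only: a tiny health check of the kind an OpenShift/Kubernetes admin
# might script, using the official `kubernetes` Python client. Assumes a local kubeconfig.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

# Flag pods that are not in the Running or Succeeded phase, across all namespaces.
unhealthy = []
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        unhealthy.append((pod.metadata.namespace, pod.metadata.name, phase))

for ns, name, phase in unhealthy:
    print(f"[WARN] {ns}/{name} is in phase {phase}")

print(f"{len(unhealthy)} pod(s) need attention")
```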

Posted 1 week ago

Apply

0.0 - 2.0 years

0 Lacs

Udaipur, Rajasthan

On-site

Location: Udaipur (Full-Time | In-Office) Experience: 2–4 Years (Preferred) Type: Full-Time, Permanent Role Overview: We are looking for a skilled and motivated DevOps Developer/Engineer to join our team in Udaipur. The ideal candidate will be responsible for automating infrastructure, deploying applications, monitoring systems, and improving development and operational processes across the organization. Key Responsibilities: Design, implement, and manage CI/CD pipelines by using tools such as Jenkins, GitHub Actions, or GitLab CI. Deploy, manage & automate infrastructure provisioning using tools like Terraform, Ansible, or similar. Deploy and monitor cloud infrastructure (AWS, Azure, or GCP) Work with containerization tools like Docker and Kubernetes Collaborate with cross functional teams to ensure smooth code releases and deployments. Monitor application performance, troubleshoot issues, and improve deployment processes. Implement security and backup best practices across the infrastructure. Stay up to date with the latest DevOps trends, tools, and best practices. Required Skills & Qualifications: Bachelor’s degree in Computer Science, Engineering, or a related field. 2+ years of experience in DevOps. Proficiency in cloud platforms like AWS, Azure, or GCP. Strong scripting skills (Bash, Python, or similar). Good understanding of system/network administration and monitoring tools. Experience working in Agile/Scrum environments. Familiarity with microservices architecture. Knowledge of security and compliance standards in cloud environments. If you’re ready to take the next step in your DevOps career, apply now. Job Types: Full-time, Permanent Benefits: Flexible schedule Paid time off Ability to commute/relocate: Udaipur City, Rajasthan: Reliably commute or planning to relocate before starting work (Preferred) Experience: DevOps: 2 years (Preferred) Location: Udaipur City, Rajasthan (Preferred) Work Location: In person
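As a rough illustration of the container-based automation this kind of role involves, here is a minimal sketch using the Docker SDK for Python; it assumes a local Docker daemon and a Dockerfile in the working directory, and the image tag and port mapping are hypothetical.

```python
# Illustrative only: building and smoke-testing a container image with the Docker SDK
# for Python (docker-py). Assumes a local Docker daemon and a Dockerfile in ".".
import docker

client = docker.from_env()

# Build the image from the local Dockerfile.
image, build_logs = client.images.build(path=".", tag="example-app:latest")
for chunk in build_logs:
    if "stream" in chunk:
        print(chunk["stream"], end="")

# Run a throwaway container and map its port 8000 to the host.
container = client.containers.run(
    "example-app:latest",
    detach=True,
    ports={"8000/tcp": 8000},
)
print("started container:", container.short_id)

# ... run smoke tests against http://localhost:8000 here ...

container.stop()
container.remove()
```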

Posted 1 week ago

Apply

8.0 years

0 Lacs

Mumbai Metropolitan Region

Remote

This job is with Morningstar, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly. The Role In this role, you will collaborate with various leads, Scrum Master, Business Analysts, QA, and other developers to build technology solutions for Morningstar's data services offering. You will need to develop a good understanding of the existing systems and the data model. The team is looking for forward-thinking problem solvers who thrive in a fast-paced environment and can learn new technologies quickly as needed. Responsibilities Design, develop, and maintain the software code base. Help implement software solutions that meet requirements and quality needs. Reinforce good development practices like test driven development, continuous integration, innovative frameworks and technology solutions that help business move faster. Develop areas of continuous and automated deployment. Requirements Completed Bachelor's degree in Engineering. 8+ years of experience developing software solutions. 2-3 years working on data analysis projects. Excellent listening, written and verbal communication skills. Hands on experience with developing web services using C#/.NET, Web API, Dot Net Core. Highly proficient in RESTful API design. Experienced with web technologies and standards (e.g., JSON, JWT). Hands on experience with Relational Databases (SQL, MSSQL, Postgres). Hands on experience with Amazon Web Services (AWS) and/or Google Cloud Platform (GCP). Hands on experience with CI/CD deployment, tools like Jenkins, Harness. Technologies Microsoft Stack : .NET/ASP.NET development, .NET Core, C#, SQL Server Java Stack : Java/J2EE, Spring (MVC, Spring Boot etc.), JPA/Hibernate, JavaScript, CSS, jQuery Good to Have: Exposure to Docker/Containerization. Knowledge of messaging/streaming technologies, like Kafka. Morningstar is an equal opportunity employer. Morningstar's hybrid work environment gives you the opportunity to work remotely and collaborate in-person each week. We've found that we're at our best when we're purposely together on a regular basis, at least three days each week. A range of other benefits are also available to enhance flexibility as needs change. No matter where you are, you'll have tools and resources to engage meaningfully with your global colleagues. I10_MstarIndiaPvtLtd Morningstar India Private Ltd. (Delhi) Legal Entity

Posted 1 week ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

Remote

This job is with Morningstar, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly. Our Team Technology drives our business. Our team is made up of talented software engineers, infrastructure engineers, leaders and UX professionals. We care about technology as a craft and a differentiator. We bring our global products to market with a mix of software, cloud, data centers, infrastructure, design and grit. The Role At Morningstar, helping investors is what brings us together and drives our work. We are seeking an experienced and motivated Senior Software Developer with strong Java expertise and comprehensive AWS cloud services knowledge. In this role, you will design, develop, and implement high-quality software solutions that drive our business forward. As a key technical leader on our team, you will collaborate with cross-functional teams to deliver scalable, secure, and high-performance applications while mentoring junior developers and ensuring best practices are followed throughout the development lifecycle. You'll interact daily with our product managers to understand our domain and create technical solutions that push us forward. We want to work with other engineers who bring knowledge and excitement about our opportunities. This position is based in our Mumbai office. Key Responsibilities Architect, design, and implement robust Java-based applications, microservices, and backend systems using modern development practices and frameworks for example JPA, Hibernate, Springboot etc. Develop and integrate RESTful APIs and services to support various business functionalities, ensuring seamless communication between distributed systems. Continuously improve the application's performance, reliability, and security by applying sound engineering principles and industry best practices. Leverage AWS services (such as EC2, Lambda, S3, RDS, DynamoDB, CloudFormation, and more) to build and deploy scalable, cost-effective solutions. Collaborate with DevOps teams to integrate CI/CD pipelines, automate deployment processes, and monitor application health using AWS CloudWatch, X-Ray, and other monitoring tools. Design and implement cloud-native architectures ensuring high availability, fault tolerance, and security across applications. Lead code reviews, ensuring adherence to coding standards, design patterns, and best practices while fostering a culture of continuous improvement. Proactively identify opportunities to streamline development processes and drive architectural improvements. Work closely with Product Managers, QA Engineers, and UX/UI Designers to gather requirements, define technical specifications, and ensure successful project delivery. Participate in Agile/Scrum ceremonies, including sprint planning, stand-ups, and retrospectives, contributing innovative ideas to enhance team productivity. Stay up-to-date with emerging technologies, tools, and best practices in Java development and AWS cloud computing. Champion a culture of innovation by exploring new approaches and continuously evolving our technology stack to maintain a competitive edge. Contribute to internal knowledge sharing through documentation, technical talks, or workshops. Required Skills & Qualifications Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. 
5+ years of proven experience in Java software development, with strong knowledge of object-oriented programming and design patterns. Solid hands-on experience with AWS cloud services, including designing and deploying production-level applications. Proficient in developing and consuming RESTful APIs and working with microservices architectures. Experience with build tools (e.g., Maven, Gradle), version control systems (e.g., Git), and Agile methodologies. Strong problem-solving skills with the ability to analyze complex systems and troubleshoot technical issues effectively. Excellent communication and interpersonal skills, with a demonstrated ability to work collaboratively in a team environment. Familiarity with containerization technologies (e.g., Docker, Kubernetes) and DevOps practices is a plus. Knowledge of additional languages or frameworks (e.g., React, Vue.js) is beneficial. Morningstar is an equal opportunity employer. If you receive and accept an offer from us, we require that personal and any related investments be disclosed confidentially to our Compliance team (days vary by region). These investments will be reviewed to ensure they meet Code of Ethics requirements. If any conflicts of interest are identified, then you will be required to liquidate those holdings immediately. In addition, depending on your department and work location, certain employee accounts must be held with an approved broker (for example, all U.S. employee accounts). If this applies and your account(s) are not with an approved broker, you will be required to move your holdings to an approved broker. Morningstar's hybrid work environment gives you the opportunity to work remotely and collaborate in-person each week. While some positions are available as fully remote, we've found that we're at our best when we're purposely together on a regular basis, typically three days each week. A range of other benefits are also available to enhance flexibility as needs change. No matter where you are, you'll have tools and resources to engage meaningfully with your global colleagues. I10_MstarIndiaPvtLtd Morningstar India Private Ltd. (Delhi) Legal Entity

Posted 1 week ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

🧾 Job Title : Chief Technology Officer (CTO) Location : Hyderabad, India Company : Innomax IT Solutions Pvt Ltd Experience : 12+ years in technology leadership, with deep expertise in HealthTech, FinTech, and TravelTech domains Reports To : Managing Director / Founder’s Office 🧠 About Innomax IT Solutions Innomax IT Solutions Pvt Ltd is a globally recognized technology and product innovation company specializing in next-generation solutions for Healthcare , Banking & Financial Services , and Travel & Tourism industries. Our mission is to build scalable, intelligent, and secure digital systems that power businesses across the world. 🚀 Role Overview As the Chief Technology Officer (CTO) , you will lead the overall technology vision, product architecture, and engineering delivery across our HealthTech, FinTech, and TravelTech verticals. This is a strategic role requiring a hands-on leader who can drive innovation, build world-class teams, and ensure scalable, secure, and compliant technology solutions. 🎯 Key Responsibilities 🏥 HealthTech Leadership Oversee the architecture and development of Hospital Information Management Systems (HIMS), Telemedicine platforms, Wellness & Preventive Care apps, and Patient Data Security modules. Ensure regulatory compliance (HIPAA, NABH, GDPR) and interoperability standards (FHIR, HL7). 💰 FinTech Leadership Lead design and deployment of secure platforms for mutual fund distribution, wealth management, and embedded finance. Drive integrations with payment gateways, KYC/AML frameworks, and regulatory APIs (SEBI, BSE, NPCI). ✈️ TravelTech Leadership Guide development of travel booking engines, hotel and flight APIs, itinerary management systems, and personalized travel AI bots. Collaborate with partners like Amadeus, Galileo, and IRCTC for seamless tech integration. 🧠 Strategic Tech Leadership Define and implement the company’s technical vision and product roadmap. Evaluate emerging technologies and recommend adoption to maintain competitive edge. Represent technology in Board-level and strategic investor discussions. 🛠️ Engineering & Product Oversight Establish engineering best practices, Agile/DevOps culture, and robust QA systems. Lead cross-functional product and tech teams from design to delivery. Drive adoption of cloud-native architecture (AWS/Azure/GCP), microservices, and containerization (Docker/Kubernetes). 🔐 Security & Scalability Implement robust cybersecurity policies, including data encryption, penetration testing, and disaster recovery. Ensure all platforms are scalable, modular, and high-performance under enterprise loads. ✅ Qualifications B.Tech/M.Tech in Computer Science or related field; MBA or executive program from top-tier B-school is a plus. Minimum 12 years of progressive technology leadership experience. Strong background in product development, system architecture, and scaling tech operations. Deep domain knowledge in at least two of the following: HealthTech, FinTech, TravelTech . Proven ability to lead teams of 50+ engineers across geographies. Hands-on experience with modern tech stack: MERN/MEAN, Python, Java, Flutter, React Native, Node.js, PostgreSQL, etc. 💡 Preferred Traits Visionary thinker with entrepreneurial mindset. Strong communicator with ability to simplify complex tech to business stakeholders. Prior experience in startup or high-growth digital product environments. Exposure to working with US/EU/APAC clients. 
🌟 What We Offer Strategic leadership position with equity options Direct impact on multi-sectoral innovations Dynamic, high-performance, and intellectually vibrant environment Opportunities to represent the company in global tech forums send resume to hr@innomaxsol.com or whatsapp to 9281111716

Posted 1 week ago

Apply

7.0 - 9.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role: Python Developer
Experience: 7-9 years
Location: Hyderabad
Mode: Hybrid (3 days from office)
Contract role. A background in the healthcare industry is mandatory.

Required Skills
Python expertise: Strong grasp of idiomatic Python, async patterns, type annotations, unit testing, and modern libraries.
API development: Experience building and scaling RESTful and/or GraphQL APIs in production (see the illustrative sketch below).
GraphQL proficiency: Familiarity with frameworks like Strawberry, Graphene, or similar.
Containerization: Hands-on experience with Docker and container-based development workflows.
GitHub Actions CI/CD: Working knowledge of GitHub Actions for automating tests and deployments.
Team collaboration: Effective communicator with a proactive, self-directed work style.

Preferred Qualifications
Kubernetes: Experience deploying or troubleshooting applications in Kubernetes environments.
AWS: Familiarity with AWS services such as ECS, EKS, S3, RDS, or Lambda.
Healthcare: Background in the healthcare industry or building patient-facing applications.
Monitoring and security: Familiarity with observability tools (e.g., Datadog, Prometheus) and secure coding practices.

If you are interested, please share your updated resume with pramod@intellistaff.in.
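As a rough illustration of the GraphQL API work this role describes, here is a minimal sketch using Strawberry mounted in FastAPI; the Patient type and in-memory data are hypothetical placeholders, and a production service would add persistence, authentication, and healthcare-grade data controls.

```python
# Illustrative only: a minimal GraphQL API with Strawberry mounted in FastAPI.
# Requires `strawberry-graphql[fastapi]` and `uvicorn`; the Patient type and
# in-memory data are hypothetical placeholders (not real patient data).
from typing import List

import strawberry
from fastapi import FastAPI
from strawberry.fastapi import GraphQLRouter

PATIENTS = [
    {"id": "p1", "name": "Asha", "age": 42},
    {"id": "p2", "name": "Ravi", "age": 35},
]

@strawberry.type
class Patient:
    id: str
    name: str
    age: int

@strawberry.type
class Query:
    @strawberry.field
    def patients(self, min_age: int = 0) -> List[Patient]:
        return [Patient(**p) for p in PATIENTS if p["age"] >= min_age]

schema = strawberry.Schema(query=Query)
app = FastAPI()
app.include_router(GraphQLRouter(schema), prefix="/graphql")

# Run with: uvicorn app:app --reload
# Example query: { patients(minAge: 40) { id name age } }
```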

Posted 1 week ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

PwC US - Acceleration Center is seeking candidates with a strong analytical background to work in our Analytics Consulting practice. Associates will work as an integral part of business analytics teams in India alongside clients and consultants in the U.S., leading teams for high-end analytics consulting engagements and providing business recommendations to project teams.

Years of Experience: Candidates with 2+ years of hands-on experience.

Must Have
Experience in building ML models in cloud environments (at least one of Azure ML, AWS SageMaker, or Databricks).
Knowledge of predictive/prescriptive analytics, especially the use of Log-Log, Log-Linear, and Bayesian regression techniques, as well as machine learning algorithms (supervised and unsupervised), deep learning algorithms, and artificial neural networks.
Good knowledge of statistics, e.g., statistical tests and distributions.
Experience in data analysis, e.g., data cleansing, standardization, and data preparation for machine learning use cases.
Experience with machine learning frameworks and tools (e.g., scikit-learn, mlr, caret, H2O, TensorFlow, PyTorch, MLlib); a short illustrative example follows this listing.
Advanced-level programming in SQL or Python/PySpark.
Expertise with visualization tools, e.g., Tableau, Power BI, AWS QuickSight.

Nice To Have
Working knowledge of containerization (e.g., AWS EKS, Kubernetes), Docker, and data pipeline orchestration (e.g., Airflow).
Good communication and presentation skills.

Roles And Responsibilities
Develop and execute project and analysis plans under the guidance of the Project Manager.
Interact with and advise consultants/clients in the U.S. as a subject matter expert to formalize the data sources to be used, the datasets to be acquired, and the data and use-case clarifications needed to get a strong hold on the data and the business problem to be solved.
Drive and conduct analysis using advanced analytics tools and coach junior team members.
Implement the necessary quality control measures to ensure deliverable integrity.
Validate analysis outcomes and recommendations with all stakeholders, including the client team.
Build storylines and make presentations to the client team and/or the PwC project leadership team.
Contribute to knowledge- and firm-building activities.

Professional And Educational Background
Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA
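As a rough illustration of the supervised machine-learning work described above, here is a minimal scikit-learn sketch on synthetic data; a real engagement would start from prepared client data and add proper validation and feature engineering.

```python
# Illustrative only: a small supervised-learning workflow with basic data preparation,
# using scikit-learn. The data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))                      # three numeric features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Standardize features, then fit a classifier, in one reproducible pipeline.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```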

Posted 1 week ago

Apply

2.0 years

0 Lacs

Greater Chennai Area

Remote

Your work days are brighter here. At Workday, it all began with a conversation over breakfast. When our founders met at a sunny California diner, they came up with an idea to revolutionize the enterprise software market. And when we began to rise, one thing that really set us apart was our culture. A culture which was driven by our value of putting our people first. And ever since, the happiness, development, and contribution of every Workmate is central to who we are. Our Workmates believe a healthy employee-centric, collaborative culture is the essential mix of ingredients for success in business. That’s why we look after our people, communities and the planet while still being profitable. Feel encouraged to shine, however that manifests: you don’t need to hide who you are. You can feel the energy and the passion, it's what makes us unique. Inspired to make a brighter work day for all and transform with us to the next stage of our growth journey? Bring your brightest version of you and have a brighter work day here. At Workday, we value our candidates’ privacy and data security. Workday will never ask candidates to apply to jobs through websites that are not Workday Careers. Please be aware of sites that may ask for you to input your data in connection with a job posting that appears to be from Workday but is not. In addition, Workday will never ask candidates to pay a recruiting fee, or pay for consulting or coaching services, in order to apply for a job at Workday. About The Team Workday’s Planning Cloud Engineering team is looking for a Senior Associate Cloud Engineer with experience in public cloud (AWS, GCP or Azure). In this role you will play an active part in designing and building the infrastructure, tools, and services delivering Workday Adaptive Planning next generation cloud platform. You will be challenged with everything from infrastructure tooling, automation, build and deployment pipelines, monitoring and logging architecture, containerization and more, all within an open, collaborative peer environment! About The Role Some of the day to day responsibilities you can expect to have include: Support Workday Planning Cloud infrastructure, working with technologies like Docker, Kubernetes, AWS, Chef and Terraform Participate in infrastructure automation leveraging Terraform, Chef, Jenkins and Golang Build automated solutions to reduce manual intervention in day to day tasks Participate in Planning and executing complicated technical projects that interact with a wide variety of teams within the company Build and respond to production monitors: triage, troubleshoot and resolution, Perform Root cause analysis Support the deployment of cloud solution software during and off regular office hours Support for Linux systems Participate in on-call monitoring response About You Are you a hardworking, creative and driven team member who can support us in our mission to gracefully support our Multi-Cloud infrastructure and Automation? If yes, we would love to hear from you! If you like trying new techniques and approaches to sophisticated problems, love to learn new technologies, are a natural collaborator and a phenomenal teammate who brings out the best in everyone around you, then give us a shout Basic Qualifications : 2+ years' DevOps, Systems/Infrastructure, or related Operations and SRE experience 1+ years of experience working directly with AWS Infrastructure services; understanding of AWS services and security required. 
1+ years of experience with at least one programming language such as Go, Python, Bash, or Perl.
Experience authoring configuration management scripts and using deployment tools: Jenkins, Puppet, Chef, or equivalent.

Other Qualifications (1+ years of experience in the following is helpful):
Cloud databases such as AWS Oracle/PostgreSQL RDS, Aurora PostgreSQL, or GCP Cloud SQL.
Orchestration tools like Kubernetes, and working knowledge of containerization (Docker).

Our Approach to Flexible Work
With Flex Work, we're combining the best of both worlds: in-person time and remote. Our approach enables our teams to deepen connections, maintain a strong community, and do their best work. We know that flexibility can take shape in many ways, so rather than a number of required days in-office each week, we simply spend at least half (50%) of our time each quarter in the office or in the field with our customers, prospects, and partners (depending on role). This means you'll have the freedom to create a flexible schedule that caters to your business, team, and personal needs, while being intentional to make the most of time spent together. Those in our remote "home office" roles also have the opportunity to come together in our offices for important moments that matter.

Are you being referred to one of our roles? If so, ask your connection at Workday about our Employee Referral process!
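As a rough illustration of the AWS automation this role involves, here is a minimal boto3 sketch; it assumes AWS credentials are already configured, and the region and tag filter are hypothetical.

```python
# Illustrative only: a small boto3 automation of the kind an infrastructure engineer
# might script - finding running EC2 instances by tag before cleaning them up.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

resp = ec2.describe_instances(
    Filters=[
        {"Name": "tag:environment", "Values": ["sandbox"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)

instance_ids = [
    inst["InstanceId"]
    for reservation in resp["Reservations"]
    for inst in reservation["Instances"]
]

print("running sandbox instances:", instance_ids)
if instance_ids:
    # Uncomment to actually stop them:
    # ec2.stop_instances(InstanceIds=instance_ids)
    print(f"would stop {len(instance_ids)} instance(s)")
```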

Posted 1 week ago

Apply

3.0 - 6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Summary We are looking for a dynamic and innovative AI/ML Engineer with expertise in Generative AI and hands-on experience in GCP or other cloud platforms. The ideal candidate will have proven experience in developing, training, fine-tuning, and deploying advanced AI/ML models. You will play a pivotal role in building scalable, production-ready solutions involving large datasets , NLP techniques, and cutting-edge frameworks such as MCP, Retrieval-Augmented Generation (RAG), and REACT (Retrieve, Extract, Adapt, Construct, Think). This role requires a solid foundation in Python , SQL , and AI/ML development pipelines, combined with a passion for solving real-world problems using AI. Roles & Responsibilities Model Development & Training Design, train, and fine-tune AI/ML models , especially Generative AI and Large Language Models (LLMs) , to address specific use cases. Build conversational AI solutions and chatbots using frameworks such as LangChain , RAG (Retrieval-Augmented Generation) , and Chain-of-Thought (COT) prompting . Apply advanced techniques, including embeddings , fine-tuning, and custom prompting strategies. Incorporate REACT (Retrieve, Extract, Adapt, Construct, Think) methods to enhance model capabilities. Develop scalable AI solutions to integrate seamlessly into production environments. Data Handling Manage large-scale datasets for AI/ML applications, ensuring data quality, transformation, and normalization. Conduct data analysis, preprocessing, and munging to extract valuable insights. Implement scalable data engineering workflows for model development and production. Cloud AI/ML Deployment Deploy, manage, and optimize AI models on Google Cloud Platform (GCP) (or other cloud platforms like AWS/Azure). Leverage GCP services such as Vertex AI, BigQuery, Cloud Functions, and Dataflow for AI workflows. Collaboration & Solutioning Collaborate with cross-functional teams, including product managers, data scientists, and software engineers, to deliver AI-driven solutions. Integrate models with client-facing applications, ensuring end-to-end implementation. Support scalable development through Docker for containerization (a plus). Continuous Improvement Stay updated with the latest advancements in AI, ML, and Generative AI frameworks, tools, and methodologies. Proactively learn new technologies and apply them to improve processes and solutions. Web Scraping (Optional but Preferred) Implement web scraping solutions to gather data from unstructured sources for model training and validation. Required Skills & Qualifications 3-6 years of experience in AI/ML model development, training, and fine-tuning. Strong programming skills in Python and SQL . Hands-on experience with Generative AI , LLMs , and NLP techniques . Experience working with LangChain , RAG frameworks , and advanced prompting strategies. Proficiency in Chain-of-Thought ( CoT ) prompting, Model Context Protocol (MCP) , and Agent-to-Agent ( A2A ) orchestration Proficiency in embeddings and fine-tuning models for specific tasks. Strong understanding of machine learning algorithms and statistical analysis. Experience working with large datasets and scalable data processing workflows. Hands-on experience with GCP (Vertex AI, BigQuery) or other cloud platforms (AWS/Azure). Knowledge of Docker for deployment and containerization. Solid skills in data cleaning, transformation, and normalization for data integrity. Preferred Skills Familiarity with Reinforcement Learning from Human Feedback (RLHF) . 
Understanding of COT (Chain-of-Thought) prompting . Proficiency in REACT (Retrieve, Extract, Adapt, Construct, Think) frameworks. Experience in web scraping techniques. Key Attributes Strong analytical and problem-solving abilities. Ability to work independently as well as collaboratively in a team environment. Excellent communication skills to interact with stakeholders and cross-functional teams. Proactive attitude to learn and adopt new AI technologies and frameworks.
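As a rough illustration of the retrieval step in a Retrieval-Augmented Generation (RAG) pipeline mentioned above, here is a minimal sketch; the embed() function is a toy stand-in for a real embedding model (for example, a Vertex AI or open-source encoder), and production systems would use a vector database rather than in-memory NumPy arrays.

```python
# Illustrative only: the retrieval step of a RAG pipeline, reduced to its core idea.
# `embed()` is a toy hashing-trick embedding; swap in a real embedding model in practice.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy bag-of-words hashing embedding; replace with a real encoder in practice."""
    v = np.zeros(256)
    for token in text.lower().split():
        v[hash(token) % 256] += 1.0
    return v / (np.linalg.norm(v) or 1.0)

documents = [
    "Invoices are processed within 5 business days.",
    "Refund requests must include the original order ID.",
    "Support is available Monday through Friday, 9am-6pm IST.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    scores = doc_vectors @ q                 # cosine similarity (vectors are unit-norm)
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

# Retrieved passages become the grounding context passed to the LLM prompt.
context = retrieve("When is support available during the week?")
prompt = "Answer using only this context:\n" + "\n".join(context) + "\n\nQuestion: ..."
print(prompt)
```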

Posted 1 week ago

Apply

14.0 years

20 - 50 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: HPC Admin / Cloud Engineer Job ID: 26821 Location: Chennai, India (Onsite) Experience Required: 7–14 Years Salary Range: ₹20 – ₹50 LPA Work Type: Full-Time Notice Period: Immediate to 60 Days Role Summary We are seeking an experienced HPC Admin / Cloud Engineer to lead the design, implementation, and support of high-performance computing (HPC) clusters. This role requires in-depth knowledge of Linux systems, cluster management, storage, networking, and automation tools. You will be part of a technical team driving innovation and performance at scale. Key Responsibilities Design, deploy, and support high-performance compute (HPC) clusters Work with CPU/GPU architectures, scalable storage, high-speed interconnects, and cloud-based compute systems Create hardware BOMs for HPC clusters, manage vendor relationships, and oversee hardware release processes Configure and manage Linux-based systems (e.g., SuSE, RedHat, Rocky, Ubuntu) for HPC environments Ensure alignment of system design with performance and functional specifications Support new product releases to manufacturing and end-users, including golden images, scripts, documentation, and training Troubleshoot network and system-level issues and optimize cluster performance Must-Have Qualifications Minimum 7 years of experience in HPC systems, cluster configuration, and Linux system administration Strong knowledge of: Linux systems (SuSE, RedHat, Rocky, Ubuntu) HPC hardware (servers, GPUs, networking, storage, BIOS, BMC) TCP/IP fundamentals and network protocols (DNS, DHCP, HTTP, LDAP, SMTP) Scripting with Shell and Python Experience with configuration management tools like Salt, Chef, or Puppet Degree Requirement: BE/BTech, MSc, MCA, or MS in Computer Engineering, Electrical Engineering, or related disciplines Candidates with only Diploma or 3-year degrees (BSc/BCA) will not be considered Preferred Qualifications Exposure to DevOps practices (CI/CD pipelines, Git, Jenkins) Containerization experience (Docker, Singularity) Familiarity with Kubernetes, Prometheus, Grafana Experience with reverse proxies/load balancers (Apache, NGINX, HA Proxy) Proven ability to create and support scalable infrastructure in a production setting Skills: networking,linux,kubernetes,tcp/ip fundamentals and network protocols (dns, dhcp, http, ldap, smtp),reverse proxies/load balancers (apache, nginx, ha proxy),scripting with shell and python,linux systems (suse, redhat, rocky, ubuntu),prometheus,cloud,hpc hardware (servers, gpus, networking, storage, bios, bmc),containerization (docker, singularity),management,devops practices (ci/cd pipelines, git, jenkins),configuration management tools (salt, chef, puppet),design,grafana

Posted 1 week ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About Us: CLOUDSUFI, a Google Cloud Premier Partner, is a data science and product engineering organization building products and solutions for the technology and enterprise industries. We firmly believe in the power of data to transform businesses and make better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.

Our Values: We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners, and the community.

Equal Opportunity Statement: CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, or national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace.

Role: Lead AI Engineer
Location: Noida, Delhi/NCR (Hybrid)
Experience: 5-10 years

Role Overview: As a Senior Data Scientist / AI Engineer, you will be a key player in our technical leadership. You will be responsible for designing, developing, and deploying sophisticated AI and Machine Learning solutions, with a strong emphasis on Generative AI and Large Language Models (LLMs). You will architect and manage scalable AI microservices, drive research into state-of-the-art techniques, and translate complex business requirements into tangible, high-impact products. This role requires a blend of deep technical expertise, strategic thinking, and leadership.

Key Responsibilities:
Architect & Develop AI Solutions: Design, build, and deploy robust and scalable machine learning models, with a primary focus on Natural Language Processing (NLP), Generative AI, and LLM-based agents.
Build AI Infrastructure: Create and manage AI-driven microservices using frameworks like Python FastAPI, ensuring high performance and reliability (see the illustrative sketch after this listing).
Lead AI Research & Innovation: Stay abreast of the latest advancements in AI/ML. Lead research initiatives to evaluate and implement state-of-the-art models and techniques for performance and cost optimization.
Solve Business Problems: Collaborate with product and business teams to understand challenges and develop data-driven solutions that create significant business value, such as building business rule engines or predictive classification systems.
End-to-End Project Ownership: Take ownership of the entire lifecycle of AI projects - from ideation, data processing, and model development to deployment, monitoring, and iteration on cloud platforms.
Team Leadership & Mentorship: Lead learning initiatives within the engineering team, mentor junior data scientists and engineers, and establish best practices for AI development.
Cross-Functional Collaboration: Work closely with software engineers to integrate AI models into production systems and contribute to the overall system architecture.

Required Skills and Qualifications
Master's (M.Tech.) or Bachelor's (B.Tech.) degree in Computer Science, Artificial Intelligence, Information Technology, or a related field.
6+ years of professional experience in a Data Scientist, AI Engineer, or related role.
Expert-level proficiency in Python and its core data science libraries (e.g., PyTorch, Hugging Face Transformers, pandas, scikit-learn).
Demonstrable, hands-on experience building and fine-tuning Large Language Models (LLMs) and implementing Generative AI solutions. Proven experience in developing and deploying scalable systems on cloud platforms, particularly AWS . Experience with GCS is a plus. Strong background in Natural Language Processing (NLP) , including experience with multilingual models and transcription. Experience with containerization technologies, specifically Docker . Solid understanding of software engineering principles and experience building APIs and microservices. Preferred Qualifications A strong portfolio of projects. A track record of publications in reputable AI/ML conferences is a plus. Experience with full-stack development (Node.js, Next.js) and various database technologies (SQL, MongoDB, Elasticsearch). Familiarity with setting up and managing CI/CD pipelines (e.g., Jenkins). Proven ability to lead technical teams and mentor other engineers. Experience developing custom tools or packages for data science workflows.
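As a rough illustration of the AI microservice work this role describes, here is a minimal FastAPI sketch exposing a Hugging Face Transformers pipeline; the endpoint name is hypothetical, and the library's default sentiment model stands in for whatever fine-tuned model a real service would load.

```python
# Illustrative only: a minimal AI microservice - a FastAPI app exposing a Hugging Face
# Transformers pipeline behind one endpoint. Requires `fastapi`, `uvicorn`, and
# `transformers` (plus a backend such as PyTorch); the endpoint name is hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI(title="example-nlp-service")
classifier = pipeline("sentiment-analysis")   # downloads a default English model on first use

class TextIn(BaseModel):
    text: str

@app.post("/classify")
def classify(payload: TextIn) -> dict:
    # The pipeline returns a list like [{"label": "POSITIVE", "score": 0.99}].
    result = classifier(payload.text)[0]
    return {"label": result["label"], "score": float(result["score"])}

# Run locally with: uvicorn service:app --port 8080
# Then: curl -X POST localhost:8080/classify -H 'Content-Type: application/json' \
#            -d '{"text": "The model deployment went smoothly."}'
```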

Posted 1 week ago

Apply

2.0 years

0 Lacs

Bhilai, Chhattisgarh, India

On-site

Job Summary: We are seeking an experienced DevOps Engineer with a strong background in deploying and managing AI applications on Azure . The ideal candidate should have experience in deploying AI systems, understands AI Agentic architectures, and can optimize and manage LLM-based applications in production environments. Key Responsibilities: Deploy, scale, and monitor AI applications on Microsoft Azure (AKS, Azure Functions, App Services, etc.). Build and optimize AI Agentic systems for robust and efficient performance. Implement CI/CD pipelines for seamless updates and deployments. Manage containerized services using Docker/Kubernetes. Monitor infrastructure cost, performance, and uptime. Collaborate with AI engineers to understand application requirements and support smooth deployment. Ensure compliance with data security and privacy standards. Requirements: 2+ years of experience in deploying and managing AI/ML applications. Proficiency in Azure cloud services and DevOps practices. Familiarity with LLM-based systems, LangChain, Vector DBs, and Python. Experience with containerization tools (Docker) and orchestration (Kubernetes). Understanding of AI system architecture, including Agentic workflows. Strong problem-solving and optimization skills. Preferred Qualifications: Knowledge and experience with Microsoft Azure Experience with Gemini, OpenAI, Anthropic, or Hugging Face APIs. Familiarity with LangChain, LlamaIndex, or ChromaDB. Prior experience in managing high-availability, secure, and cost-optimized AI deployments.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the Role We are looking for an experienced Java Backend Developer who is passionate about building scalable backend systems and writing clean, efficient code. You will play a key role in the development and maintenance of our core backend services, APIs, and system architecture. Key Responsibilities Design, develop, and maintain scalable and high-performance backend services using Java Build RESTful APIs and integrate third-party services and APIs. Optimize application performance, scalability, and reliability. Work closely with front-end developers, product managers, QA, and DevOps teams to deliver end-to-end solutions. Participate in code reviews, unit testing, and documentation. Troubleshoot and resolve technical issues and bugs. Contribute to continuous improvement in development processes and product quality. Requirements 4+ years of hands-on experience in backend development using Java . Strong understanding of Spring Boot , Spring MVC , and related frameworks. Experience with RESTful API development and microservices architecture . Solid understanding of object-oriented programming and design patterns . Experience with SQL and relational databases (e.g., MySQL, PostgreSQL). Familiarity with NoSQL databases (e.g., MongoDB, Redis) is a plus. Hands-on experience with version control systems like Git. Good understanding of CI/CD pipelines and containerization tools (Docker/Kubernetes) is an advantage. Strong problem-solving and debugging skills. Ability to write clean, maintainable, and well-documented code. Good to Have Exposure to cloud platforms like AWS, Azure, or GCP. Knowledge of messaging systems like Kafka or RabbitMQ. Familiarity with Agile/Scrum methodologies. Experience with Python will be great. To apply, please send your resume to sooraj@superpe.in SuperPe is an equal opportunity employer and welcomes candidates of all backgrounds to apply. We look forward to hearing from you!

Posted 1 week ago

Apply

2.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

PwC US - Acceleration Center is seeking candidates with a strong analytical background to work in our Analytics Consulting practice. Associates will work as an integral part of business analytics teams in India alongside clients and consultants in the U.S., leading teams for high-end analytics consulting engagements and providing business recommendations to project teams.

Years of Experience: Candidates with 2+ years of hands-on experience.

Must Have
Experience in building ML models in cloud environments (at least one of Azure ML, AWS SageMaker, or Databricks).
Knowledge of predictive/prescriptive analytics, especially the use of Log-Log, Log-Linear, and Bayesian regression techniques (a short worked example follows this listing), as well as machine learning algorithms (supervised and unsupervised), deep learning algorithms, and artificial neural networks.
Good knowledge of statistics, e.g., statistical tests and distributions.
Experience in data analysis, e.g., data cleansing, standardization, and data preparation for machine learning use cases.
Experience with machine learning frameworks and tools (e.g., scikit-learn, mlr, caret, H2O, TensorFlow, PyTorch, MLlib).
Advanced-level programming in SQL or Python/PySpark.
Expertise with visualization tools, e.g., Tableau, Power BI, AWS QuickSight.

Nice To Have
Working knowledge of containerization (e.g., AWS EKS, Kubernetes), Docker, and data pipeline orchestration (e.g., Airflow).
Good communication and presentation skills.

Roles And Responsibilities
Develop and execute project and analysis plans under the guidance of the Project Manager.
Interact with and advise consultants/clients in the U.S. as a subject matter expert to formalize the data sources to be used, the datasets to be acquired, and the data and use-case clarifications needed to get a strong hold on the data and the business problem to be solved.
Drive and conduct analysis using advanced analytics tools and coach junior team members.
Implement the necessary quality control measures to ensure deliverable integrity.
Validate analysis outcomes and recommendations with all stakeholders, including the client team.
Build storylines and make presentations to the client team and/or the PwC project leadership team.
Contribute to knowledge- and firm-building activities.

Professional And Educational Background
Any graduate /BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA
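As a rough illustration of the log-log regression technique listed above, here is a minimal statsmodels sketch on synthetic price-demand data; in a log-log model the fitted slope reads as an elasticity.

```python
# Illustrative only: a log-log regression fit on synthetic data with statsmodels.
# In a log-log model the slope is an elasticity (roughly "% change in demand
# per 1% change in price").
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
price = rng.uniform(10, 100, size=300)
true_elasticity = -1.3
demand = 5000 * price ** true_elasticity * rng.lognormal(sigma=0.1, size=300)

X = sm.add_constant(np.log(price))      # regress log(demand) on log(price)
model = sm.OLS(np.log(demand), X).fit()

print(model.params)                     # intercept and estimated elasticity (~ -1.3)
print(model.rsquared)
```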

Posted 1 week ago

Apply

8.0 years

0 Lacs

India

Remote

Urgent Hiring!!!
Location: Remote
Role: Staff Engineer
Experience: 8+ years

Responsibilities
Collaborate with the Engineering Group and Product team to understand requirements and design comprehensive solutions.
Optimize applications for maximum speed, scalability, and security.
Implement security and data protection measures.
Build high-quality, reusable code for both frontend and backend applications.
Document and communicate application design, topologies, and architecture clearly to peers and the business.
Work closely with User Experience, Product Management, Engineering, and Marketing teams to create outstanding web experiences.
Partner with Engineering and other teams to develop new frameworks, feature sets, and functionalities.
Lead and coach team members, promoting thought leadership and project excellence.
Provide technical leadership, ensuring adherence to best software engineering practices, such as TDD, continuous integration, delivery, and deployment.

Must-Have Experience Requirements
Education and experience:
○ Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
○ Minimum of 8+ years of professional experience in full-stack development.
Technical requirements:
○ Proficiency in JavaScript, including ES6 and beyond, asynchronous programming, closures, and prototypal inheritance.
○ Expertise in modern front-end frameworks/libraries (React, Vue.js).
○ Strong understanding of HTML5, CSS3, and pre-processing platforms like SASS or LESS.
○ Experience with responsive and adaptive design principles.
○ Knowledge of front-end build tools like Webpack, Babel, and npm/yarn.
○ Proficiency in Node.js and frameworks like Express.js, Koa, or NestJS.
○ Experience with RESTful API design and development.
○ Experience with serverless (Lambda, Cloud Functions).
○ Experience with GraphQL.
○ Experience with SQL databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Redis).
○ Experience with caching and search frameworks (Redis, Elasticsearch).
○ Proficiency in database schema design and optimization.
○ Experience with containerization tools (Docker, Kubernetes).
○ Experience with CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI).
○ Knowledge of cloud platforms (AWS, Azure, Google Cloud).
○ Proficiency in testing frameworks and libraries (Jest, Vitest, Cypress, Storybook).
○ Strong debugging skills using tools like Chrome DevTools and the Node.js debugger.
○ Expertise in using Git and platforms like GitHub, GitLab, or Bitbucket.
○ Understanding of web security best practices (OWASP).
○ Experience with authentication and authorization mechanisms (OAuth, JWT).
○ Experience with system security, scalability, and system performance.
Leadership & team:
○ Proven experience in leading and mentoring a team of developers.
○ Proven track record of delivering complex projects successfully.
○ Ability to conduct code reviews and provide constructive feedback.
○ Experience in agile methodologies (Scrum, Kanban).
○ Ability to manage project timelines and deliverables effectively.
○ Excellent verbal and written communication skills.
○ Ability to explain technical concepts to non-technical stakeholders.
○ Strong analytical and problem-solving skills.
○ Ability to troubleshoot and resolve complex technical issues.
○ Experience in working with cross-functional teams (designers, product managers, QA).
○ Ability to quickly learn and adapt to new technologies and frameworks.
Perks Day off on the 3rd Friday of every month (one long weekend each month) Monthly Wellness Reimbursement Program to promote health well-being Monthly Office Commutation Reimbursement Program Paid paternity and maternity leaves

Posted 1 week ago

Apply

2.0 years

0 Lacs

India

Remote

This position is posted by Jobgether on behalf of Cambridge Mobile Telematics . We are currently looking for a Software Engineer in India . Join a global engineering team that's driving meaningful change in the mobility and safety space. In this role, you'll develop cutting-edge software that transforms complex data into impactful solutions for some of the world's leading insurers, automakers, and public agencies. You'll work on scalable platforms powered by AI, cloud services, and real-time analytics, collaborating with a passionate and diverse team. This is an opportunity to grow your skills, contribute to safer roads worldwide, and make a direct impact at scale. Accountabilities: Design and implement scalable, maintainable, and testable software solutions based on complex business needs Participate in the entire software development lifecycle including code reviews, deployment, and production support Translate business and functional requirements into clear, actionable technical tasks Collaborate with cross-functional teams to ensure system reliability, performance, and usability Mentor junior engineers and promote best practices in development, architecture, and testing Maintain and improve CI/CD pipelines, support containerization efforts, and ensure code meets operational standards Troubleshoot production issues and provide on-call support when needed Requirements Bachelor's degree in Computer Science or related field Minimum 2 years of professional software engineering experience Proficiency in Python with strong software design and architecture skills Experience building scalable web applications using Django Solid knowledge of SQL and database systems like PostgreSQL and Amazon Redshift Familiarity with AWS services such as EC2, S3, Lambda, SNS, SQS, and RDS Hands-on experience with CI/CD tools, especially Jenkins Working knowledge of Docker and container-based development (Bonus) Experience building RESTful APIs and working with a React frontend Strong analytical, communication, and problem-solving skills Ability to work collaboratively in a fast-paced, cross-functional team environment Benefits Competitive salary based on experience and skills Potential equity grants in the form of Restricted Stock Units (RSUs) Private healthcare and life insurance coverage Generous parental leave and flexible scheduling options Remote-friendly work policy, depending on role responsibilities Access to employee-led resource groups (e.g., LGBTQIA+, Women in Tech, Wellness) Extensive learning resources and professional development programs Mission-driven culture focused on improving road safety worldwide Jobgether is a Talent Matching Platform that partners with companies worldwide to efficiently connect top talent with the right opportunities through AI-driven job matching . When you apply, your profile goes through our AI-powered screening process designed to identify top talent efficiently and fairly. 🔍 Our AI evaluates your CV and LinkedIn profile thoroughly, analyzing your skills, experience and achievements. 📊 It compares your profile to the job's core requirements and past success factors to determine your match score. 🎯 Based on this analysis, we automatically shortlist the 3 candidates with the highest match to the role. 🧠 When necessary, our human team may perform an additional manual review to ensure no strong profile is missed. The process is transparent, skills-based, and free of bias — focusing solely on your fit for the role. 
Once the shortlist is completed, we share it directly with the company that owns the job opening. The final decision and next steps (such as interviews or additional assessments) are then made by their internal hiring team. Thank you for your interest!
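As a rough illustration of the AWS messaging patterns (SNS/SQS) this role mentions, here is a minimal boto3 SQS sketch; it assumes configured AWS credentials, and the queue name and event payload are hypothetical.

```python
# Illustrative only: sending and receiving a message with Amazon SQS via boto3,
# a common pattern for decoupled processing. Assumes AWS credentials are configured;
# the queue name and payload are hypothetical.
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.get_queue_url(QueueName="example-trip-events")["QueueUrl"]

# Producer side: enqueue an event.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"trip_id": "t-123", "event": "trip_completed"}),
)

# Consumer side: poll, process, then delete the message so it is not redelivered.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=5)
for msg in resp.get("Messages", []):
    body = json.loads(msg["Body"])
    print("processing", body)
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```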

Posted 1 week ago

Apply

12.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we’re a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth – bringing real positive changes in an increasingly virtual world and it drives us beyond generational gaps and disruptions of the future. We are looking forward to hire Azure Professionals in the following areas : Azure + .NET Architect Position Overview The Azure + .NET Architect is responsible for designing, implementing, and overseeing enterprise-scale cloud solutions using Microsoft Azure and .NET technologies. This role combines deep technical expertise in both Azure cloud services and .NET development with strategic architecture responsibilities to drive digital transformation initiatives and modernize application portfolios. Key Responsibilities Solution Architecture & Design Design and architect scalable, secure, and resilient cloud-native applications using Azure services and .NET technologies Create comprehensive technical architecture blueprints that align with business requirements and enterprise standards Define and implement microservices architectures, containerization strategies, and serverless computing patterns Establish integration patterns between on-premises systems and Azure cloud services Design and implement CI/CD pipelines for automated deployment and DevOps practices Azure Cloud Strategy & Implementation Lead cloud migration strategies from on-premises to Azure, including lift-and-shift and re-architecting approaches Design hybrid cloud solutions that seamlessly integrate on-premises infrastructure with Azure services Implement Azure governance, security, and compliance frameworks across enterprise environments Optimize Azure resource utilization, performance, and cost management strategies Design disaster recovery and business continuity solutions using Azure services NET Application Architecture Architect modern .NET applications using .NET 6/7/8, ASP.NET Core, and related Microsoft technologies Design and implement APIs, web applications, and background services using .NET technologies Establish coding standards, design patterns, and best practices for .NET development teams Lead modernization efforts for legacy .NET Framework applications to .NET Core/5+ Implement security patterns, authentication, and authorization frameworks in .NET applications Technical Leadership & Governance Provide technical leadership and mentorship to development teams and junior architects Establish architectural governance processes and review boards for technical decision-making Conduct architecture reviews, code reviews, and technical assessments Required Qualifications Education & Experience Bachelor's degree in Computer Science, Information Technology, or related field. 12-15+ years of experience in .NET + Azure environment with Architect experience. 
Core Azure Services Deep knowledge of Azure compute services (Virtual Machines, App Services, Container Instances, AKS) Expertise with Azure data services (SQL Database, Cosmos DB, Storage Account, Data Factory) Proficiency in Azure networking (Virtual Networks, Load Balancers, Application Gateway, VPN) Experience with Azure security services (Key Vault, Active Directory, Security Center) Knowledge of Azure monitoring and management tools (Monitor, Log Analytics, Application Insights) Advanced Azure Capabilities Experience with Azure DevOps, CI/CD pipelines, and Infrastructure as Code (ARM, Bicep, Terraform) Knowledge of Azure Functions, Logic Apps, and serverless computing patterns Understanding of Azure integration services (Service Bus, Event Grid, API Management) Experience with Azure cost optimization and resource governance Application Development Experience building RESTful APIs, GraphQL services, and microservices architectures Knowledge of authentication and authorization patterns (OAuth, JWT, Identity Server) Proficiency in front-end integration with frameworks like Angular, React, or Blazor Experience with message queuing systems and event-driven architectures Understanding of Domain-Driven Design (DDD) and Clean Architecture principles DevOps & Development Practices Strong experience with Git version control and branching strategies Expertise in CI/CD pipeline design and implementation (Azure DevOps, GitHub Actions) Knowledge of containerization technologies (Docker, Kubernetes, Azure Container Registry) Experience with infrastructure as code and automated deployment strategies Understanding of monitoring, logging, and observability practices Architecture & Design Skills Proven experience in enterprise architecture patterns and principles Strong understanding of distributed systems, scalability, and performance optimization Knowledge of security architecture, compliance frameworks, and data protection Experience with API design, integration patterns, and service-oriented architectures Understanding of database design, data modelling, and data architecture. At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence aided with technology for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles: Flexible work arrangements, free spirit, and emotional positivity; Agile self-determination, trust, transparency, and open collaboration; All support needed for the realization of business goals; and Stable employment with a great atmosphere and ethical corporate culture.

Posted 1 week ago

Apply

0.0 - 8.0 years

14 - 17 Lacs

Gurugram, Haryana

On-site

Job Title: Senior MERN Stack Developer Location: Gurugram (On-site) CTC: Up to ₹17 LPA Experience Required: 8+ Years Employment Type: Full-Time Notice Period: Immediate joiner Job Summary: We are looking for an experienced and highly skilled MERN Stack Developer to join our dynamic development team in Gurugram. The ideal candidate will be responsible for designing and implementing scalable web applications using modern technologies like React.js, Node.js, Express.js, and both SQL and NoSQL databases. Key Responsibilities: Design and develop high-performance web applications using the MERN stack. Build reusable components and front-end libraries using React.js. Develop RESTful APIs and backend services using Node.js and Express.js. Integrate and manage data across MongoDB, MySQL, and PostgreSQL databases. Optimize applications for maximum speed, scalability, and performance. Collaborate with UI/UX designers, product managers, and other developers. Participate in architecture discussions and code reviews. Ensure cross-platform compatibility and responsiveness of applications. Maintain code integrity and organization using version control tools like Git. Required Skill Set: Strong hands-on experience (8+ years) in MERN stack development. Expert-level proficiency in React.js, Node.js, Express.js. Solid understanding and working experience with MongoDB, MySQL, and PostgreSQL. Experience in writing clean, modular, and scalable code. Strong understanding of data structures, algorithms, and design patterns. Proficiency in REST API development and integration. Familiarity with deployment, CI/CD, and DevOps tools is a plus. Excellent problem-solving and analytical skills. Good to Have: Experience with cloud platforms like AWS, Azure, or GCP. Knowledge of containerization (Docker/Kubernetes). Understanding of testing frameworks (e.g., Mocha, Jest). Exposure to Agile/Scrum methodologies. Why Work With Us? Competitive compensation up to ₹17 LPA. Opportunity to work on cutting-edge technologies and challenging projects. A collaborative, innovative, and supportive team environment. Excellent career growth and learning opportunities. Job Types: Full-time, Permanent Pay: ₹1,400,000.00 - ₹1,700,000.00 per year Benefits: Health insurance Paid sick time Paid time off Provident Fund Schedule: Day shift / Morning shift, Monday to Friday Experience: MERN Stack Developer: 8 years (Required) Location: Gurugram, Haryana (Required) Work Location: In person

Posted 1 week ago

Apply

4.0 - 7.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

We are seeking an experienced Data Scientist (4-7 years) to build and optimize AI models that power our next-generation agentic AI systems. You will work on designing, deploying, and scaling AI-driven solutions that adapt, learn, and operate autonomously. The ideal candidate is adept at handling large-scale data, developing ML pipelines, and provisioning AI models for self-hosted and cloud-based environments. Responsibilities: Develop and optimize machine learning models for agentic AI applications, including autonomous decision-making, reasoning, and planning. Design and implement scalable ML pipelines for real-time and batch inference, supporting high-performance AI workloads. Work with MLOps and engineering teams to deploy self-hosted AI models efficiently, ensuring optimized inference and minimal latency. Utilize distributed training techniques (e.g., Horovod, DeepSpeed) to enhance model scalability across multi-GPU or multi-node environments. Implement model fine-tuning, prompt engineering, and continuous learning systems to improve AI adaptability. Conduct rigorous model evaluation, including accuracy, fairness, and performance benchmarking. Stay updated with advancements in LLMs, reinforcement learning, and deep learning techniques relevant to agentic AI. Requirements: Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, or a related field. 4-7 years of experience in applied machine learning, AI model development, or data science roles. Proficiency in Python and ML frameworks such as TensorFlow, PyTorch, or JAX. Experience in developing, training, and deploying LLMs and transformer-based models. Strong knowledge of distributed computing frameworks for AI scalability (e.g., Kubernetes, Ray, Horovod, DeepSpeed). Hands-on experience in model optimization, quantization, and inference acceleration. Expertise in self-hosted AI model provisioning, cloud/on-premise AI infrastructure, and containerization (Docker, Kubernetes). Familiarity with reinforcement learning, autonomous systems, or decision intelligence is a plus. Preferred Skills: Experience with retrieval-augmented generation (RAG), fine-tuning, and knowledge distillation for AI models. Strong understanding of prompt engineering, embeddings, and vector search (FAISS, Pinecone, Weaviate). Background in working with large-scale datasets, including data pipelines, feature engineering, and model monitoring. Familiarity with AI observability and monitoring frameworks for model performance tracking.
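To make the vector-search and RAG items above concrete, here is a minimal, hypothetical retrieval sketch using FAISS in Python. The embedding dimension, corpus, and query are placeholder data, not anything specified by this posting.

```python
# Illustrative sketch only: a tiny FAISS similarity search of the kind used
# in retrieval-augmented generation (RAG). Assumes `faiss` and `numpy` are
# installed; the embedding dimension and vectors below are made up.
import faiss
import numpy as np

dim = 64                                   # embedding dimension (assumed)
rng = np.random.default_rng(0)

# Pretend these are document embeddings produced by an embedding model.
doc_vectors = rng.random((1000, dim), dtype=np.float32)

index = faiss.IndexFlatL2(dim)             # exact L2 index; larger corpora would use IVF/HNSW
index.add(doc_vectors)                     # index the corpus

query = rng.random((1, dim), dtype=np.float32)
distances, ids = index.search(query, 5)    # top-5 nearest documents
print(ids[0], distances[0])
```

In a real pipeline the retrieved document ids would be mapped back to text chunks and passed to the generator model as context.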

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

HCLTech is hiring an MLOps Engineer for its Chennai location. Job Overview: We are looking for an experienced MLOps Engineer to help deploy, scale, and manage machine learning models in production environments. You will work closely with data scientists and engineering teams to automate the machine learning lifecycle, optimize model performance, and ensure smooth integration with data pipelines. Experience Required: 6 to 10 years Location: Chennai Notice Period: Immediate / 30 days Key Responsibilities: Transform prototypes into production-grade models. Assist in building and maintaining machine learning pipelines and infrastructure across cloud platforms such as AWS, Azure, and GCP. Develop REST APIs or FastAPI services for model serving, enabling real-time predictions and integration with other applications. Collaborate with data scientists to design and develop drift detection and accuracy measurements for live deployed models. Collaborate with data governance and technical teams to ensure compliance with engineering standards. Maintain models in production: Collaborate with data scientists and engineers to deploy, monitor, update, and manage models in production. Manage the full CI/CD cycle for live models, including testing and deployment. Develop logging, alerting, and mitigation strategies for handling model errors and optimize performance. Troubleshoot and resolve issues related to ML model deployment and performance. Support both batch and real-time integrations for model inference, ensuring models are accessible through APIs or scheduled batch jobs, depending on the use case. Contribute to AI platform and engineering practices: Contribute to the development and maintenance of the AI infrastructure, ensuring the models are scalable, secure, and optimized for performance. Collaborate with the team to establish best practices for model deployment, version control, monitoring, and continuous integration/continuous deployment (CI/CD). Drive the adoption of modern AI/ML engineering practices and help enhance the team’s MLOps capabilities. Develop and maintain Flask or FastAPI-based microservices for serving models and managing model APIs. Minimum Required Skills: Bachelor's degree in computer science, analytics, mathematics, or statistics. Strong experience in Python, SQL, and PySpark. Solid understanding and knowledge of containerization technologies (Docker, Podman, Kubernetes). Proficient in CI/CD pipelines, model monitoring, and MLOps platforms (e.g., AWS SageMaker, Azure ML, MLflow). Proficiency in cloud platforms, specifically AWS, Azure, and GCP. Familiarity with ML frameworks such as TensorFlow, PyTorch, and Scikit-learn. Familiarity with batch processing integration for large-scale data pipelines. Experience with serving models using FastAPI, Flask, or similar frameworks for real-time inference. Certifications in AWS, Azure, or ML technologies are a plus. Experience with Databricks is highly valued. Strong problem-solving and analytical skills. Ability to work in a team-oriented, collaborative environment.
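As a rough illustration of the FastAPI model-serving responsibility described above, the sketch below assumes a pre-trained scikit-learn model saved as model.joblib; the file name, route, and payload shape are illustrative only, not HCLTech's actual stack.

```python
# Minimal, hypothetical FastAPI model-serving sketch.
# Assumes a scikit-learn model was saved to "model.joblib" beforehand.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-serving-demo")
model = joblib.load("model.joblib")        # pre-trained model; path is illustrative


class Features(BaseModel):
    values: list[float]                    # flat feature vector for one sample


@app.post("/predict")
def predict(features: Features) -> dict:
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}

# Run locally with:  uvicorn serve:app --reload   (assuming this file is serve.py)
```

A production version would typically add input validation, request logging, and drift metrics around the same endpoint.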
Tools and Technologies: Model Development & Tracking: TensorFlow, PyTorch, scikit-learn, MLflow, Weights & Biases Model Packaging & Serving: Docker, Kubernetes, FastAPI, Flask, ONNX, TorchScript CI/CD & Pipelines: GitHub Actions, GitLab CI, Jenkins, ZenML, Kubeflow Pipelines, Metaflow Infrastructure & Orchestration: Terraform, Ansible, Apache Airflow, Prefect Cloud & Deployment: AWS, GCP, Azure, Serverless (Lambda, Cloud Functions) Monitoring & Logging: Prometheus, Grafana, ELK Stack, WhyLabs, Evidently AI, Arize Testing & Validation: Pytest, unittest, Pydantic, Great Expectations Feature Store & Data Handling: Feast, Tecton, Hopsworks, Pandas, Spark, Dask Message Brokers & Data Streams: Kafka, Redis Streams Vector DB & LLM Integrations (optional): Pinecone, FAISS, Weaviate, LangChain, LlamaIndex, PromptLayer Interested candidates, kindly share their resumes on paridhnya_dhawankar@hcltech.com with below details. Overall Experience: Current and Expected CTC: Current and Preferred Location: Notice Period:

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Position Summary... Responsible for coding, unit testing, and building high-performance and scalable applications that meet the needs of millions of Walmart International customers, in the areas of supply chain management & Customer experience. What you'll do... About Team: Our team collaborates with Walmart International, which has over 5,900 retail units operating outside of the United States under 55 banners in 26 countries including Africa, Argentina, Canada, Central America, Chile, China, India, Japan, and Mexico, to name a few. What you'll do: You are responsible for coding, unit testing, and building high-performance and scalable applications that meet the needs of millions of Walmart International customers, in the areas of supply chain management & Customer experience. You are expected to be an intellectually curious engineer who is passionate about the domain/technology in general. What you'll bring: 3 to 6 years of total experience, of which 3+ years are in backend engineering platform development. 3+ years of experience in Java technologies, distributed systems, and large-scale application development and design. Hands-on experience with Kafka and Cassandra. Experience with a containerization technology and microservices. Well versed in CI/CD. Work with Java, multithreading, data structures, algorithms, and design patterns to develop robust, high-performance, and scalable applications. Extremely strong technical background with the capability of being hands-on and the ability to mentor top individual technical talent. About Walmart Global Tech Imagine working in an environment where one line of code can make life easier for hundreds of millions of people. That’s what we do at Walmart Global Tech. We’re a team of software engineers, data scientists, cybersecurity experts, and service professionals within the world’s leading retailer who make an epic impact and are at the forefront of the next retail disruption. People are why we innovate, and people power our innovations. We are people-led and tech-empowered. We train our team in the skillsets of the future and bring in experts like you to help us grow. We have roles for those chasing their first opportunity as well as those looking for the opportunity that will define their career. Here, you can kickstart a great career in tech, gain new skills and experience for virtually every industry, or leverage your expertise to innovate at scale, impact millions, and reimagine the future of retail. Flexible, hybrid work We use a hybrid way of working with primary in-office presence coupled with an optimal mix of virtual presence. We use our campuses to collaborate and be together in person, as business needs require and for development and networking opportunities. This approach helps us make quicker decisions, remove location barriers across our global team, and be more flexible in our personal lives. Benefits Beyond our great compensation package, you can receive incentive awards for your performance. Other great perks include a host of best-in-class benefits: maternity and parental leave, PTO, health benefits, and much more. Belonging We aim to create a culture where every associate feels valued for who they are, rooted in respect for the individual. Our goal is to foster a sense of belonging, to create opportunities for all our associates, customers and suppliers, and to be a Walmart for everyone. At Walmart, our vision is "everyone included." By fostering a workplace culture where everyone is—and feels—included, everyone wins.
Our associates and customers reflect the makeup of all 19 countries where we operate. By making Walmart a welcoming place where all people feel like they belong, we’re able to engage associates, strengthen our business, improve our ability to serve customers, and support the communities where we operate. Equal Opportunity Employer Walmart, Inc., is an Equal Opportunities Employer – By Choice. We believe we are best equipped to help our associates, customers, and the communities we serve live better when we really know them. That means understanding, respecting, and valuing unique styles, experiences, identities, ideas, and opinions – while being inclusive of all people. Minimum Qualifications... Outlined below are the required minimum qualifications for this position. If none are listed, there are no minimum qualifications. Minimum Qualifications: Option 1: Bachelor's degree in computer science, information technology, engineering, information systems, cybersecurity, or related area and 2 years’ experience in software engineering or related area at a technology, retail, or data-driven company. Option 2: 4 years’ experience in software engineering or related area at a technology, retail, or data-driven company. Preferred Qualifications... Outlined below are the optional preferred qualifications for this position. If none are listed, there are no preferred qualifications. Certification in Security+, Network+, GISF, GSEC, CISSP, or CCSP, or a Master’s degree in Computer Science, Information Technology, Engineering, Information Systems, Cybersecurity, or related area. Primary Location... Pardhanani Wilshire II, Cessna Business Park, Kadubeesanahalli Village, Varthur Hobli, India R-2222575
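For readers unfamiliar with the Kafka work this role mentions, here is a minimal consumer sketch. The role itself is Java-centric; Python (kafka-python client) is used here purely for brevity, and the topic, broker, and message shape are assumptions rather than anything specified above.

```python
# Hypothetical sketch of consuming order events from Kafka (kafka-python client).
# Topic name, broker address, and message shape are placeholders.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                                  # placeholder topic
    bootstrap_servers="localhost:9092",        # placeholder broker
    group_id="demo-consumer",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    order = message.value
    # A real service would validate, transform, and persist (e.g., to Cassandra) here.
    print(message.topic, message.partition, message.offset, order)
```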

Posted 1 week ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Client: Our client is a multinational IT services and consulting company headquartered in the USA, with revenues of 19.7 billion USD, a global workforce of 3,50,000, and a listing on NASDAQ. It is one of the leading IT services firms globally, known for its work in digital transformation, technology consulting, and business process outsourcing. Its business focus spans digital engineering, cloud services, AI and data analytics, enterprise applications (SAP, Oracle, Salesforce), IT infrastructure, and business process outsourcing. It has major delivery centers in India, including cities like Chennai, Pune, Hyderabad, and Bengaluru, and offices in over 35 countries. India is a major operational hub. Job Title: Gen AI Architect Key Skills: Gen AI Architect, Gateway Job Locations: Hyderabad Experience: 12+ Years. Education Qualification: Any Graduation. Work Mode: Hybrid. Employment Type: Contract. Notice Period: Immediate Job Description: Key Responsibilities: Architectural Leadership Design and document scalable, reliable, and maintainable architectures for Gen AI applications. Ensure solutions meet production-grade standards and enterprise requirements. Technical Decision Making Evaluate trade-offs in technology choices, design patterns, and frameworks. Align decisions with Gen AI best practices and software engineering principles. Team Guidance Mentor and guide architects and engineers. Foster a collaborative, innovative, and high-performance development environment. Hands-On Development Actively contribute to prototyping and implementation using C and Python. Drive research and development of core AI Gateway components. Product Development Mindset Build a responsible and scalable AI Gateway considering: Cost efficiency Security and compliance Upgradeability Ease of use and integration Required Qualifications: Technical Expertise Extensive experience in API-based projects and full lifecycle deployment of Gen AI/LLM applications. Strong hands-on proficiency in C and practical experience with Python. Cloud & DevOps Expertise in Docker, Kubernetes, and OpenShift for containerization and orchestration. Working knowledge of: Azure AI Services: OpenAI, AI Search, Document Intelligence AWS Services: EKS, SageMaker, Bedrock Security & Access Management Familiarity with Okta for secure identity and access management. LLM & Gen AI Tools Experience with LangChain, LlamaIndex, and OpenAI SDKs in C. Monitoring & Troubleshooting Proven ability to monitor, trace, and debug complex distributed AI systems. Personal Attributes: Strong leadership and mentorship capabilities. Excellent communication skills for both technical and non-technical audiences. Problem-solving mindset with attention to detail. Passion for advancing AI technologies in production environments. Preferred Experience: Prior leadership in large-scale, production-grade AI initiatives. Experience in enterprise technology projects involving Gen AI.
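As a minimal illustration of the kind of LLM call an AI Gateway would sit in front of, the sketch below uses the OpenAI Python SDK (the posting lists Python among its languages). The helper name, model name, prompt, and reliance on an OPENAI_API_KEY environment variable are all assumptions, not this employer's actual design.

```python
# Minimal, hypothetical LLM call behind a gateway-style helper,
# using the OpenAI Python SDK (v1.x style).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Send a single-turn chat completion and return the text reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask("Summarise why an AI gateway needs request tracing."))
```

A real gateway would add authentication, rate limiting, cost accounting, and tracing around calls like this one.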

Posted 1 week ago

Apply

0 years

0 Lacs

India

On-site

About FAI. At First American (India), we don’t just build software—we build the future of real estate technology. Our people-first culture empowers bold thinkers and passionate technologists to solve real-world challenges through scalable architecture and innovative design. If you're driven by impact, thrive in collaborative environments, and want to shape how world-class products are delivered—this is the place for you. Job Title: Manager – Software Development Role Summary Looking for an experienced engineering leader with 15+ years of experience who can work directly with Product, Research, and Design teams to build complex software applications, ensuring engineering output meets the highest quality standards and the team continues to thrive, grow, and continuously improve. What we have for you Opportunity to lead multiple engineering teams working with cloud-native technologies, designing and developing microservices to build title and escrow APIs for all divisions of First American. Responsibilities and Duties: As an Engineering Manager, your roles and responsibilities include: Responsible for the quality and quantity of engineering delivery of squads, with continuous, iterative improvement through better planning and execution. Work closely with engineering and product leaders to provide thought and execution leadership towards strategic outcomes. Work closely with Product Managers, Architects, and Leads to perform complex software process definition, requirements analysis, and high-level design/modeling to convert stakeholder needs into software solutions with thorough feasibility analysis (Technical, Financial, Operational). Attract, nurture, coach, and retain talent. Ensure every assigned engineer, lead, and architect has a career progression plan through regular check-in points and real-time feedback. Contribute to creating an enhanced skill matrix to drive training, development, and career goals for engineers. Take the lead in defining and building the vision for our engineering organization, and interact with other departments to organize support wherever necessary. Work towards identifying a unified quality and standards framework for application development and support. Create a robust production support framework targeted at troubleshooting, conflict resolution, and observability to address problems early and support the team on production and non-production application issues. Technology Stack: An ideal candidate should have an understanding of and hands-on experience with the following technologies. We are open to candidates with strong experience across modern technology stacks. The ideal candidate will bring a mix of hands-on expertise and architectural insight across both legacy and emerging technologies. We are not limited to the .NET ecosystem; we are open to technologies like Node.js, Python, React, JavaScript, Kafka, Docker, and Terraform. Proven experience leading, mentoring, and supporting agile development teams of 10–15 engineers. Hands-on experience in designing, developing, and maintaining enterprise-grade web applications across all phases of the SDLC using technologies such as C#, ASP.NET, MVC 5, Web API, .NET Core, Microservices, and SQL Server (2014/2016/2018). Exposure to or working knowledge of modern tech stacks including Node.js, Python, and React is highly preferred. Strong understanding of event-driven architecture and experience working with Apache Kafka or similar messaging systems. Familiarity with containerization and orchestration tools such as Docker and Kubernetes.
Hands-on experience with Infrastructure as Code (IaC) tools like Terraform for cloud provisioning and automation. Cloud expertise in AWS or Azure, with an understanding of key services, architectural best practices, and trade-offs. Solid foundation in object-oriented programming, design patterns, and SOLID principles. Strong understanding of secure development practices including vulnerability assessments, secure code reviews, SSL/Non-SSL implementations, and compliance frameworks. Ability to define and evolve software architecture by understanding requirements, constraints, and dependencies—while identifying opportunities to optimize performance and scalability. Experience in establishing and enforcing technical standards, architectural guidelines, and best practices across teams. Ability to communicate architecture and design decisions clearly to engineering teams and stakeholders.

Posted 1 week ago

Apply

6.0 years

0 Lacs

India

Remote

Job Title: Senior GCP Cloud Engineer/Lead Location: Remote Type: Full-Time with NTT Data India Services Job Description: A role that requires a good understanding of cloud-based infrastructure and application hosting, design, and development, as well as managing cloud environments with security in mind. Good problem-solving skills with the ability to spot and resolve issues quickly. Strong interpersonal skills with clear and precise verbal and written communication. Skills Must-haves: 6+ years' relevant experience in designing and developing Google Cloud infrastructure solutions (minimum 3+ years of recent, active GCP experience). Cloud Provider: Good understanding of and hands-on experience with GCP, with exposure to at least 7 of the following services: Cloud Identity & Identity and Access Management, Role-Based Access Control (RBAC), Compute Engine, Storage (Cloud Storage, Persistent Disks, Cloud Storage Nearline or Coldline, Cloud Filestore, etc.), VPC, Google Load Balancer, Cloud Interconnect, Google Domains, Cloud DNS, Cloud Content Delivery Network, Cloud Pub/Sub, Stackdriver, etc. Awareness of and experience with the landing zone concept on Google Cloud (hub-and-spoke architecture). SCM tools: Git, Bitbucket, GitLab, etc. Basic understanding of a CI environment for code using at least one of Jenkins / GitHub Actions / GitLab / Azure DevOps / Cloud Build-Cloud Deploy. Hands-on Bash shell scripting. Experience with and a good understanding of Terraform scripting. Certification: Associate Cloud Engineer – GCP. Monitoring and event management (Cloud Monitoring, Grafana, Prometheus). Nice to have Containerization: Good Docker containerization skills and hands-on Kubernetes orchestration (GKE). Experience with Anthos. Experience with SQL and NoSQL databases. Experience with app migration to the cloud would be an add-on.
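To give a flavour of the scripting side of this role, here is a small, hypothetical Cloud Storage task using the google-cloud-storage Python client. The bucket and object names are placeholders, and credentials are assumed to come from Application Default Credentials; nothing here reflects NTT Data's actual environment.

```python
# Hypothetical sketch of a small Cloud Storage task with the
# google-cloud-storage client. Credentials come from Application Default
# Credentials (e.g., `gcloud auth application-default login`).
from google.cloud import storage

client = storage.Client()

bucket = client.bucket("my-landing-zone-bucket")      # placeholder bucket name
blob = bucket.blob("reports/daily.csv")               # placeholder object path
blob.upload_from_filename("daily.csv")                # local file, illustrative

# List what is in the bucket to confirm the upload.
for item in client.list_blobs("my-landing-zone-bucket", prefix="reports/"):
    print(item.name, item.size)
```

In practice the bucket itself would usually be provisioned through Terraform, with scripts like this reserved for operational tasks.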

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Featured Companies