0 years
0 Lacs
Saharanpur, Uttar Pradesh, India
On-site
Jubilant Pharma Limited is a global integrated pharmaceutical company offering a wide range of products and services to its customers across geographies. We organise our business into two segments: Specialty Pharmaceuticals, comprising Radiopharmaceuticals (including radiopharmacies) and Contract Manufacturing of Sterile Injectable, Non-sterile & Allergy Therapy Products; and Generics & APIs, comprising Solid Dosage Formulations & Active Pharmaceutical Ingredients. Jubilant Generics (JGL) is a wholly-owned subsidiary of Jubilant Pharma. In India, JGL has Research & Development units at Noida and Mysore, and two manufacturing facilities, one at Mysore, Karnataka and another at Roorkee, Uttarakhand, engaged in API and dosage manufacturing, respectively. The Mysore site is spread over 69 acres and is a USFDA-approved facility manufacturing APIs for sale worldwide. The API portfolio focuses on lifestyle-driven therapeutic areas (CVS, CNS) and targets complex and newly approved molecules. The company is the market leader in four APIs and is among the top three players for another three APIs in its portfolio, helping it maintain a high contribution margin. The Roorkee, Uttarakhand site is a state-of-the-art facility audited and approved by the USFDA, Japan PMDA, UK MHRA, TGA, WHO, and Brazil ANVISA. This business focuses on a B2B model for the EU, Canada, and emerging markets. Both manufacturing units are backward-integrated and are supported by around 500 research and development professionals based at Noida and Mysore. R&D works on the development of new products in APIs and solid dosage formulations, including oral solids, sterile injectables, semi-solid ointments, creams, and liquids.
All BA/BE studies are done in-house at our 80-bed facility, which is inspected by and holds approvals/certifications from the Drugs Controller General (India) and has global regulatory accreditations including USFDA, EMEA, ANVISA (Brazil), INFARMED (Portugal), NPRA (Malaysia), and AGES MEA (Austria) for GCP, along with NABL and CAP accreditations for pathology lab services. JGL's full-fledged Regulatory Affairs & IPR professionals ensure a unique portfolio of patents and product filings in regulated and non-regulated markets. Jubilant Pharma's revenue is growing steadily: in Financial Year 2018-19 it was INR 53,240 million, compared to INR 39,950 million in Financial Year 2017-18. Please refer to www.jubilantpharma.com for more information about the organization.

Responsibilities:
- SOPs: Prepare SOPs for IPQA.
- Batch Manufacturing Documents: Monitor and ensure online review of batch manufacturing and batch packing documents; review batch manufacturing, packaging, and analytical records, equipment logs, etc. before batch release.
- In-Process Controls: Carry out in-process control tests, line-clearance activities, and sampling during process validation, cleaning validation, batch manufacturing, and packing.
- GMP Monitoring: Monitor adherence to cGMP activities and laid-down procedures in the receipt, storage, testing, processing, and dispatch of products.
Posted 2 days ago
0 years
0 Lacs
Saharanpur, Uttar Pradesh, India
On-site
Responsibilities:
- Document Validation: Review analytical documents intended for regulatory submission.
- RM/PM: Review raw material specifications and standard test procedures (STPs) for compliance with pharmacopoeia; review packaging material specifications and STPs for compliance with pharmacopoeia.
- Manufacturing: Review batch manufacturing and batch packaging records intended for regulatory filing.
- Analytical Method Validation: Review analytical method validation documents.
- Analytical Method Transfer: Review analytical method transfer documents.
- Process and Equipment Validation: Review process validation, cleaning validation, and equipment qualification documents.
Posted 2 days ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Experience: 8+ Years
Location: Hyderabad, Telangana
Mandatory Skills: Very strong API automation skills; API, Web UI, and mobile test automation; building robust automation frameworks; integrating with CI/CD pipelines; and driving quality across multiple platforms.

Key Responsibilities
API Automation:
- Design and maintain API automation frameworks (REST/SOAP) using tools like REST Assured, Karate, or Postman/Newman.
- Validate functional, performance, and security aspects of APIs.
UI Automation:
- Develop and maintain web UI automation scripts using Selenium, Cypress, or Playwright.
- Ensure cross-browser and responsive testing coverage.
Mobile Automation:
- Create and manage mobile test automation frameworks using Appium, Espresso, or XCUITest.
- Test across iOS and Android platforms for functionality, performance, and usability.
General Responsibilities:
- Build and execute automated regression, smoke, and integration test suites.
- Integrate automation with CI/CD pipelines (Jenkins, GitLab, GitHub Actions, Azure DevOps).
- Collaborate with QA, Dev, and Product teams to define automation strategy and coverage.
- Monitor, report, and analyze test execution results and quality metrics.
- Ensure scalable, maintainable, and reusable test scripts across projects.

Required Skills & Qualifications
- Bachelor's/Master's degree in Computer Science, Engineering, or a related field.
- 4+ years of QA automation experience across API, UI, and mobile platforms.
- Strong programming skills in Java, Python, or JavaScript/TypeScript.
- Expertise with automation tools: API (REST Assured, Karate, Postman/Newman); UI (Selenium, Cypress, Playwright); Mobile (Appium, Espresso, XCUITest).
- Proficient in Git, CI/CD pipelines, and Docker/Kubernetes.
- Solid understanding of SDLC, STLC, and Agile/Scrum methodologies.
- Experience with BDD frameworks (Cucumber, JBehave) and test reporting tools (Allure, Extent Reports).
- Familiarity with cloud-based device farms (BrowserStack, Sauce Labs, AWS Device Farm).
Good to Have
- Experience with performance and load testing (JMeter, Gatling).
- Exposure to cloud platforms (AWS, GCP, Azure).
- Knowledge of security testing practices for APIs and mobile apps.
- Familiarity with monitoring tools (Grafana, Kibana, Datadog) for test observability.
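As an illustration of the response validation this role describes, here is a minimal, hedged sketch in Python. The posting names tools like REST Assured and Karate; this stand-in checks a parsed JSON payload against an expected status code and required fields, with invented field names, and makes no real network call.

```python
# Minimal API response validation helper, as might appear in an automated
# smoke test. Field names ("id", "status", "total") are illustrative
# assumptions, not taken from any real service under test.

def validate_response(status_code: int, payload: dict,
                      required_fields: set) -> list:
    """Return a list of validation errors (empty list means the check passed)."""
    errors = []
    if status_code != 200:
        errors.append(f"unexpected status code: {status_code}")
    missing = required_fields - payload.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    return errors

if __name__ == "__main__":
    # Simulated response from a hypothetical /orders endpoint.
    resp = {"id": 42, "status": "shipped"}
    print(validate_response(200, resp, {"id", "status", "total"}))
```

In a real suite, a helper like this would sit behind the HTTP client so that functional assertions stay independent of the transport library.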
Posted 2 days ago
10.0 years
10 - 45 Lacs
Hyderabad, Telangana, India
On-site
Experience: 10-15 Years
Work Mode: Pune & Hyderabad
Job Type: Full-time
Mandatory Skills: Solution architecture, Gen AI, LLMs, AI/ML, Python, Azure Cloud (Databricks, Data Factory, Azure Purview) or GCP (BigQuery, Vertex AI, Gemini). Domain: BFSI, Retail, Supply Chain, or Manufacturing.

Role Overview
We are seeking a highly experienced Principal Solution Architect to lead the design, development, and implementation of sophisticated cloud-based data solutions for our key clients. The ideal candidate will possess deep technical expertise across multiple cloud platforms (AWS, Azure, GCP), data architecture paradigms, and modern data technologies. You will be instrumental in shaping data strategies, driving innovation through areas like GenAI and LLMs, and ensuring the successful delivery of complex data projects across various industries.

Required Qualifications & Skills
- Experience: 10+ years in IT, with a significant focus on data architecture, solution architecture, and data engineering. Proven experience in a principal-level or lead architect role.
- Cloud Expertise: Deep, hands-on experience with major cloud platforms. Azure: Microsoft Fabric, Data Lake, Power BI, Data Factory, Azure Purview; good understanding of Azure Service Foundry, Agentic AI, and Copilot. GCP: BigQuery, Vertex AI, Gemini.
- Data Science Leadership: Understanding and experience in integrating AI/ML capabilities, including GenAI and LLMs, into data solutions.
- Leadership & Communication: Exceptional communication, presentation, and interpersonal skills. Proven ability to lead technical teams and manage client relationships.
- Problem-Solving: Strong analytical and problem-solving abilities with a strategic mindset.
- Education: Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related field.
Key Responsibilities
- Solution Design & Architecture: Lead the architecture and design of robust, scalable, and secure enterprise-grade data solutions, including data lakes, data warehouses, data mesh, and real-time data pipelines on AWS, Azure, and GCP.
- Client Engagement & Pre-Sales: Collaborate closely with clients to understand their business challenges, translate requirements into technical solutions, and present compelling data strategies. Support pre-sales activities, including proposal development and solution demonstrations.
- Data Strategy & Modernization: Drive data and analytics modernization initiatives, leveraging cloud-native services, Big Data technologies, GenAI, and LLMs to deliver transformative business value.
- Industry Expertise: Apply data architecture best practices across various industries (e.g., BFSI, Retail, Supply Chain, Manufacturing).

Preferred Qualifications
- Relevant certifications in AWS, Azure, GCP, Snowflake, or Databricks.
- Experience with Agentic AI and hyper-intelligent automation.

Skills: data, azure, architecture, cloud, gcp, aws, data architecture, data solutions, design, ml, solution architecture, gen ai, llms, ai, python, azure cloud, azure datafactory, azure databricks, data science, problem solving
Posted 2 days ago
5.0 - 8.0 years
5 - 25 Lacs
Hyderabad, Telangana, India
On-site
Experience: 5-8 Years
Work Mode: Pune & Hyderabad
Job Type: Full-time
Mandatory Skills: Python, Django, Flask, Microservices Architecture, Snowflake, SQL, RESTful APIs, Kubernetes, and any cloud (AWS, Azure, or GCP)

Role Overview
We are looking for experienced modern microservice developers to join our team and contribute to the design, development, and optimization of scalable microservices and data processing workflows. The ideal candidate will have expertise in Python, containerization, and orchestration tools, along with strong skills in SQL and data integration.

Key Responsibilities
- Develop and optimize data processing workflows and large-scale data transformations using Python.
- Write and maintain complex SQL queries in Snowflake to support efficient data extraction, manipulation, and aggregation.
- Integrate diverse data sources and perform validation testing to ensure data accuracy and integrity.
- Design and deploy containerized applications using Docker, ensuring scalability and reliability.
- Build and maintain RESTful APIs to support a microservices architecture.
- Implement CI/CD pipelines and manage orchestration tools such as Kubernetes or ECS for automated deployments.
- Monitor and log application performance, ensuring high availability and quick issue resolution.

Required Skills
- 5-8 years of experience in Python development, with a focus on data processing and automation.
- Proficiency in SQL, with hands-on experience in Snowflake.
- Strong experience with Docker and containerized application development.
- Solid understanding of RESTful APIs and microservices architecture.
- Familiarity with CI/CD pipelines and orchestration tools like Kubernetes or ECS.
- Knowledge of logging and monitoring tools to ensure system health and performance.
- Experience with cloud platforms (AWS, Azure, or GCP) is a plus.

Education: Bachelor's degree in Computer Science, Engineering, or a related field.
Skills: data, microservices, python, sql, architecture, data processing, kubernetes, orchestration, processing, skills, developer, docker, django, flask, aws, azure, snowflake, restful apis
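The SQL-driven extraction and aggregation work described in this role can be sketched with an in-memory SQLite stand-in (Snowflake itself requires a live account and credentials; the table and column names below are invented for illustration):

```python
import sqlite3

# In-memory SQLite stand-in for the kind of aggregation query the role
# describes; a real pipeline would run similar SQL against Snowflake.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("east", 120.0), ("east", 80.0), ("west", 50.0)],
)

# Aggregate order totals per region, largest first.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total "
    "FROM orders GROUP BY REGION ORDER BY total DESC".replace("REGION", "region")
).fetchall()
print(rows)  # → [('east', 200.0), ('west', 50.0)]
conn.close()
```

The GROUP BY / ORDER BY shape is the same in Snowflake SQL; only the connection layer would change.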
Posted 2 days ago
5.0 years
0 Lacs
India
On-site
Overview
We are seeking a technically strong DNS Specialist to join our global team of architects and engineers responsible for all aspects of naming services across a large enterprise, including DNS, DHCP, IPAM, DNS security, operations, governance, engineering, and architecture. This role is not focused on day-to-day operations. Instead, it emphasizes technical contribution to the redesign of global DNS infrastructure, ensuring scalability, consistency, and resiliency across hybrid and multi-cloud environments. You'll work alongside experts shaping the future of enterprise DNS while supporting operational teams through design validation, documentation, and automation.

Key Responsibilities
- Collaborate with a cross-functional team of architects and engineers to assess and redesign the global DNS/DHCP environment.
- Contribute technical expertise to the design and documentation of a unified, enterprise-scale DNS architecture.
- Develop and execute validation and testing strategies to ensure designs are robust and operationally sound.
- Troubleshoot and analyze DNS/DHCP design-related issues in complex hybrid environments.
- Provide guidance on integration of DNS/DHCP with enterprise network services and cloud platforms.
- Support development of automation strategies for DNS/DHCP testing, validation, and lifecycle management.
- Participate in governance and security initiatives to ensure DNS infrastructure meets enterprise standards.
- Help deliver and improve DNS management tools used across multi-platform environments, including on-premises and cloud.

Required Skills
- Strong foundation in DNS protocols and their operational behaviors.
- 4-5 years of hands-on DNS/DHCP experience in enterprise environments.
- Solid background in network engineering (routing, switching, TCP/IP fundamentals).
- Strong knowledge of DNS record types (A, AAAA, PTR, CNAME, MX, SRV, TXT, NS, SOA, glue records, etc.).
- Experience with DNS/DHCP in hybrid and multi-cloud environments (Azure DNS, GCP Cloud DNS, AWS Route 53).

Preferred Skills
- Previous experience with DDI solutions and management tools.
- Familiarity with tools like Windows DNS a plus.
- Experience with automation and scripting (Python, PowerShell, Ansible).
- Background in infrastructure redesign or migration projects.

Soft Skills
- Strong problem-solving and troubleshooting abilities.
- Ability to translate complex technical concepts into clear documentation.
- Excellent communication and collaboration skills across global teams.
- Strategic thinker with a focus on technical validation and testing.
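To give a flavor of the automated DNS validation this role calls for, here is a small hedged sketch in Python: it checks a toy zone (all names and addresses are invented, using documentation ranges) for required apex record types and for A/PTR consistency, the kind of rule a design-validation suite might enforce.

```python
# Toy DNS zone validation: records are (name, type, value) tuples.
# All names and addresses below are invented for illustration.

REQUIRED_TYPES = {"SOA", "NS"}  # every zone needs these at its apex

def validate_zone(records):
    """Return a list of problems found in the zone (empty = passes)."""
    problems = []
    present = {rtype for _, rtype, _ in records}
    for rtype in sorted(REQUIRED_TYPES - present):
        problems.append(f"missing required record type: {rtype}")

    # A/PTR consistency: every A record's address should appear as a PTR name.
    a_ips = {value for _, rtype, value in records if rtype == "A"}
    ptr_ips = {name for name, rtype, _ in records if rtype == "PTR"}
    for ip in sorted(a_ips - ptr_ips):
        problems.append(f"A record {ip} has no matching PTR")
    return problems

zone = [
    ("example.test", "SOA", "ns1.example.test hostmaster.example.test"),
    ("example.test", "NS", "ns1.example.test"),
    ("host1.example.test", "A", "192.0.2.10"),
    ("192.0.2.10", "PTR", "host1.example.test"),
    ("host2.example.test", "A", "192.0.2.11"),
]
print(validate_zone(zone))  # → ['A record 192.0.2.11 has no matching PTR']
```

A production check would of course query live servers or a DDI export rather than a hard-coded list, but the rule structure is the same.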
Posted 2 days ago
5.0 - 8.0 years
5 - 25 Lacs
Hyderabad, Telangana, India
On-site
Experience: 5-8 Years
Work Mode: Pune & Hyderabad
Job Type: Full-time
Mandatory Skills: Python, SQL, ETL, NumPy / Scikit-learn / Pandas, AI/ML, Gen AI, TensorFlow, PyTorch, agentic AI frameworks (Autogen, LangGraph, CrewAI, Agentforce), machine learning, predictive machine learning, and LLMs.

Job Description
We are seeking an experienced and driven Senior AI/ML Engineer with 5-8 years of experience in AI/ML, covering predictive ML, GenAI, and agentic AI. The ideal candidate should have a strong background in developing and deploying machine learning models, as well as a passion for innovation and problem-solving.

Required Qualifications & Skills
- Bachelor's or Master's degree in Computer Science, AI/ML, or Data Science.
- 5 to 8 years of overall experience, with hands-on experience in the design and implementation of machine learning, deep learning, and LLM models for solving business problems.
- Proven experience working with generative AI technologies, including prompt engineering, fine-tuning large language models (LLMs), embeddings, vector databases (e.g., FAISS, Pinecone), and Retrieval-Augmented Generation (RAG) systems.
- Expertise in Python (NumPy, Scikit-learn, Pandas), TensorFlow, PyTorch, transformers (e.g., Hugging Face), or MLlib.
- Experience working with agentic AI frameworks such as Autogen, LangGraph, CrewAI, and Agentforce.
- Expertise in cloud-based data and AI solution design and implementation using GCP, AWS, or Azure, including the use of their Gen AI services.
- Good experience in building complex and scalable ML and Gen AI solutions and deploying them into production environments.
- Experience with scripting in SQL, extracting large datasets, and designing ETL flows.
- Excellent problem-solving and analytical skills with the ability to translate business requirements into data science and Gen AI solutions.
- Effective communication skills, with the ability to convey complex technical concepts to both technical and non-technical stakeholders.
Key Responsibilities
- Build practical, in-depth AI/ML, Gen AI, and agentic AI solutions for customers' business problems.
- Design, develop, and deploy machine learning models and algorithms.
- Conduct research and stay up to date with the latest advancements in AI/ML, GenAI, and agentic AI.
- Lead a team of junior AI engineers, providing direction and support.

Skills: ml, learning, models, machine learning, skills, data, design, machine learning models, data science, etl, ai, sql, python, numpy, scikit-learn, pandas, gen ai, tensorflow, pytorch, ai framework, autogen, langgraph, crewai, agentforce, llm
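The retrieval step of the RAG systems this role mentions can be sketched in a few lines of plain Python. Real systems obtain vectors from an embedding model and store them in a vector database such as FAISS or Pinecone; the 3-dimensional vectors and document titles below are invented for illustration.

```python
import math

# Toy retrieval step of a RAG pipeline: rank documents by cosine
# similarity to a query vector. The vectors here are hand-made stand-ins
# for embedding-model output.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "warranty terms": [0.7, 0.3, 0.1],
}
query = [1.0, 0.0, 0.0]

# Retrieve the top-2 most similar documents for the query; these would
# then be stuffed into the LLM prompt as grounding context.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked[:2])  # → ['refund policy', 'warranty terms']
```

The same ranking logic is what a vector store performs at scale with approximate-nearest-neighbor indexes instead of a full sort.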
Posted 2 days ago
4.0 - 6.0 years
10 - 14 Lacs
Bengaluru, Delhi / NCR
Hybrid
Strong in Node.js, Express.js, MongoDB, CSS, and HTML. Minimum 6-month contract.
Posted 2 days ago
0 years
0 Lacs
Guntur East, Andhra Pradesh, India
On-site
Job Description
We are seeking a Research Analyst (Scientific Writing) to join our team and support the development of high-quality scientific documents, publications, and research-based content. The ideal candidate will have a strong background in life sciences, biotechnology, or pharmaceuticals, combined with excellent skills in scientific writing, data interpretation, and literature analysis. This role involves transforming complex scientific data into clear, accurate, and impactful content for research reports, white papers, regulatory documents, manuscripts, and client deliverables.

Responsibilities:
- Conduct in-depth literature reviews and gather relevant scientific data from peer-reviewed journals, databases, and clinical reports.
- Analyze and synthesize scientific information to prepare well-structured documents, including research summaries, manuscripts, white papers, and regulatory content.
- Collaborate with scientists, subject matter experts, and cross-functional teams to ensure accuracy, clarity, and consistency of written materials.
- Prepare data-driven reports and presentations for internal and external stakeholders.
- Ensure compliance with scientific, ethical, and regulatory writing standards (e.g., ICH, GCP, CONSORT, or similar).
- Stay updated with current developments in the life sciences, healthcare, and biotechnology sectors.

Requirements:
- Master's or PhD in Life Sciences, Biotechnology, Pharmacy, Medicine, or a related field.
- Proven experience in scientific/medical writing or research analysis.
- Strong ability to critically analyze data and translate complex concepts into clear, concise narratives.
- Proficiency with scientific databases (PubMed, Embase, Scopus, etc.) and reference management tools (EndNote, Mendeley, Zotero).
- Excellent written and verbal communication skills with attention to detail.
- Familiarity with statistical analysis and clinical trial design is a plus.
- Ability to work independently, manage deadlines, and handle multiple projects.
What We Offer:
- Opportunity to work on impactful projects in the life sciences and healthcare sector.
- A collaborative environment with scientists, researchers, and industry experts.
- Professional development and training in scientific writing and research methodology.
- Competitive compensation package with growth opportunities.
Posted 2 days ago
10.0 years
10 - 45 Lacs
Pune, Maharashtra, India
On-site
Experience: 10-15 Years
Work Mode: Pune & Hyderabad
Job Type: Full-time
Mandatory Skills: Solution architecture, Gen AI, LLMs, AI/ML, Python, Azure Cloud (Databricks, Data Factory, Azure Purview) or GCP (BigQuery, Vertex AI, Gemini). Domain: BFSI, Retail, Supply Chain, or Manufacturing.

Role Overview
We are seeking a highly experienced Principal Solution Architect to lead the design, development, and implementation of sophisticated cloud-based data solutions for our key clients. The ideal candidate will possess deep technical expertise across multiple cloud platforms (AWS, Azure, GCP), data architecture paradigms, and modern data technologies. You will be instrumental in shaping data strategies, driving innovation through areas like GenAI and LLMs, and ensuring the successful delivery of complex data projects across various industries.

Required Qualifications & Skills
- Experience: 10+ years in IT, with a significant focus on data architecture, solution architecture, and data engineering. Proven experience in a principal-level or lead architect role.
- Cloud Expertise: Deep, hands-on experience with major cloud platforms. Azure: Microsoft Fabric, Data Lake, Power BI, Data Factory, Azure Purview; good understanding of Azure Service Foundry, Agentic AI, and Copilot. GCP: BigQuery, Vertex AI, Gemini.
- Data Science Leadership: Understanding and experience in integrating AI/ML capabilities, including GenAI and LLMs, into data solutions.
- Leadership & Communication: Exceptional communication, presentation, and interpersonal skills. Proven ability to lead technical teams and manage client relationships.
- Problem-Solving: Strong analytical and problem-solving abilities with a strategic mindset.
- Education: Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related field.
Key Responsibilities
- Solution Design & Architecture: Lead the architecture and design of robust, scalable, and secure enterprise-grade data solutions, including data lakes, data warehouses, data mesh, and real-time data pipelines on AWS, Azure, and GCP.
- Client Engagement & Pre-Sales: Collaborate closely with clients to understand their business challenges, translate requirements into technical solutions, and present compelling data strategies. Support pre-sales activities, including proposal development and solution demonstrations.
- Data Strategy & Modernization: Drive data and analytics modernization initiatives, leveraging cloud-native services, Big Data technologies, GenAI, and LLMs to deliver transformative business value.
- Industry Expertise: Apply data architecture best practices across various industries (e.g., BFSI, Retail, Supply Chain, Manufacturing).

Preferred Qualifications
- Relevant certifications in AWS, Azure, GCP, Snowflake, or Databricks.
- Experience with Agentic AI and hyper-intelligent automation.

Skills: data, azure, architecture, cloud, gcp, aws, data architecture, data solutions, design, ml, solution architecture, gen ai, llms, ai, python, azure cloud, azure datafactory, azure databricks, data science, problem solving
Posted 2 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Hello Connections,
Our client is one of the top 5 software giants in India, with over USD 11.3 billion in revenue and a global workforce of 240,000 employees. It delivers end-to-end technology, consulting, and business process services to clients across the globe, with a presence in 60+ countries, and is publicly traded on the NSE & BSE (India) and NYSE (USA).

Job Title: Full Stack Python Developer
Location: Pune
Experience: 5 to 8 years (minimum 6 years relevant as a Full Stack Python Developer)
Job Type: Contract to hire
Work Mode: Work from office (5 days)
Notice Period: Immediate joiners (able to join in the first week of September)

Mandatory Skills: Python 3, Angular, cloud technology (GCP / Azure / AWS), understanding of Docker and Kubernetes.

Essential Responsibilities and Duties:
- Strong Experience in Python and/or Java: Proven experience (5+ years) in backend development with Python or Java, focusing on building scalable and maintainable applications.
- Angular Development Expertise: Strong hands-on experience in developing modern, responsive web applications using Angular.
- Microservices Architecture: In-depth knowledge of designing, developing, and deploying microservices-based architectures.
- DevOps Understanding: Good understanding of DevOps practices, CI/CD pipelines, and tools to automate deployment and operations.
- Problem-Solving Skills: Ability to investigate, analyse, and resolve complex technical issues efficiently.
- Adaptability: Strong aptitude for learning and applying new technologies in a fast-paced environment.
- Cloud Environments Knowledge: Hands-on experience with at least one cloud platform (GCP, Azure, AWS).
- Containerization Technologies: Experience working with container technologies like Kubernetes and Docker for application deployment and orchestration.
Previous Experience and Competencies:
- Bachelor's degree in an IT-related discipline.
- Strong computer literacy with aptitude and readiness for multidiscipline training.
- 5-8 years of seniority (senior and hands-on).

Preferred Qualifications:
- Strong in software engineering.
- Interest in designing, analysing, and troubleshooting large-scale distributed systems.
- Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.
- Ability to debug and optimize code and automate routine tasks.

Good to have: Familiarity with data integration platforms like Dataiku or industrial data platforms like Cognite would be a bonus.
Posted 2 days ago
15.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Description

Role Summary
We are looking for a Generative AI/ML Lead with advanced cloud architecture expertise, deep knowledge of RAG and agentic AI patterns, and hands-on leadership in delivering enterprise-grade AI solutions. This role combines strategic vision with technical depth, leading a team of engineers to implement AI-powered applications at scale, including edge and hybrid deployments.

Key Responsibilities
- Strategize GenAI and ML solution architecture across cloud, edge, and hybrid environments.
- Evaluate LLM applications and develop observability frameworks.
- Research and implement the latest technologies and frameworks.
- Work with diffusion models such as Stable Diffusion, MidJourney, Runway, Imagen, and Veo.
- Work with structured data alongside LLM frameworks.
- Lead design and implementation of RAG-based knowledge systems and agentic AI workflows.
- Select and integrate enterprise-grade AI models (OpenAI, Anthropic, Mistral, LLaMA, custom fine-tuned models).
- Drive cloud-native AI deployments leveraging AWS, Azure, and GCP AI services.
- Architect LLMOps pipelines for scalable AI model lifecycle management.
- Implement AI governance, risk management, and compliance frameworks for regulated industries.
- Lead PoCs and pilots, then guide them to production-grade rollouts.
- Evaluate and optimize AI cost, performance, and security trade-offs.
- Mentor and manage AI engineers, setting best practices for RAG, agentic, and edge AI.
- Own delivery outcomes for all GenAI initiatives, ensuring scope, timelines, and quality targets are met.
- Collaborate with business stakeholders to translate needs into AI-powered products.
- Establish AI monitoring and observability (drift detection, hallucination tracking, usage analytics).

Required Skills
- 8-15 years of technology experience, with 4+ years in AI/ML and 2+ years in GenAI.
- Proven leadership in cloud AI architectures (AWS Bedrock/SageMaker, Azure OpenAI, GCP Vertex AI).
- Strong expertise in RAG architectures, embeddings, and semantic search.
- Experience with agentic AI frameworks (LangGraph Agents, Autogen, CrewAI, OpenAI/Google Agent SDKs).
- Advanced knowledge of vector stores, distributed search, and multi-modal AI.
- Proficiency in edge AI deployments and low-latency AI inference optimization.
- Expertise in cloud networking, identity, and security for AI workloads.
- Strong understanding of the AI product lifecycle, from ideation to production.
- Excellent stakeholder management and team leadership skills.
- Step in as a hands-on problem solver to debug, optimize, or redesign solutions when engineers encounter roadblocks.
- Conduct code and architecture reviews to maintain engineering excellence.

Preferred Skills
- Experience building, testing, and deploying various ML models.
- Experience building MCP and A2A protocol integrations.
- Experience with AI marketplaces and model hosting.
- Multi-cloud AI cost optimization strategies.
- Contributions to AI architecture standards in enterprise settings.
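The drift-detection side of the observability work this role mentions can be sketched as a rolling-window check. This is a toy Python illustration, not a production design: the metric (a per-response quality score), window size, and threshold are all invented assumptions.

```python
from collections import deque

# Toy drift check: flag when the recent average of a quality metric
# (e.g. a per-response groundedness score) falls well below a baseline.
# Window size and tolerance below are illustrative, not tuned values.

class DriftMonitor:
    def __init__(self, baseline, window=5, tolerance=0.1):
        self.baseline = baseline
        self.scores = deque(maxlen=window)
        self.tolerance = tolerance

    def observe(self, score):
        """Record a score; return True if drift is detected."""
        self.scores.append(score)
        if len(self.scores) < self.scores.maxlen:
            return False  # not enough data yet to judge
        avg = sum(self.scores) / len(self.scores)
        return avg < self.baseline - self.tolerance

monitor = DriftMonitor(baseline=0.9)
readings = [0.92, 0.91, 0.88, 0.70, 0.65, 0.60]
flags = [monitor.observe(s) for s in readings]
print(flags)  # drift is flagged once the rolling average degrades
```

Production systems would pair checks like this with alerting and with statistical tests rather than a fixed tolerance, but the rolling-window shape is the common core.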
Posted 2 days ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Senior Systems Integrator Lead

About the Role:
We're seeking an experienced Senior Systems Integrator Lead to spearhead the integration of our cutting-edge LLM solutions with diverse enterprise systems. This is a technical leadership role where you'll be hands-on in architecting, building, and deploying complex integration solutions while providing guidance and mentorship to a team of engineers. You'll be at the forefront of connecting disparate systems, orchestrating seamless LLM integrations, and establishing best practices for AI-driven system architecture. The ideal candidate combines deep technical expertise in systems integration with proven leadership capabilities and extensive experience in LLM/Generative AI implementations.

Key Responsibilities:

Technical Leadership & Team Guidance
- Lead Integration Architecture: Design and oversee complex, multi-system integration strategies that seamlessly connect LLM solutions with existing enterprise infrastructure.
- Team Technical Guidance: Mentor and guide development teams on integration best practices, code architecture patterns, and LLM implementation strategies.
- Hands-on Development: Remain technically hands-on, writing code, conducting code reviews, and troubleshooting complex integration challenges.
- Standards & Best Practices: Establish and enforce integration standards, development workflows, and quality assurance processes.

LLM & AI Integration Expertise
- Advanced LLM Integration: Design and implement sophisticated integration patterns for various LLM providers (OpenAI, Anthropic, Azure OpenAI, AWS Bedrock, etc.).
AI Pipeline Architecture: Build robust, scalable pipelines for prompt engineering, response processing, and model orchestration Performance Optimization: Optimize LLM integration performance including token management, caching strategies, and response time optimization Multi-modal AI Integration: Integrate text, image, and other AI modalities into existing business workflows Systems Integration & Architecture: Enterprise Integration Patterns: Implement complex integration solutions using APIs, message queues, ETL/ELT pipelines, and event-driven architectures Microservices Architecture: Design and maintain microservices-based integration layers with proper service mesh, API gateway, and monitoring implementations Cloud-Native Solutions: Architect cloud-native integration solutions leveraging containers, serverless functions, and managed services Data Flow Management: Ensure secure, efficient data flow between systems while maintaining data integrity and compliance requirements Full-Stack Development & UI Integration: React.js Applications: Build sophisticated front-end applications using React.js that interface with LLM backends and integrated enterprise systems API Development: Design and implement RESTful and GraphQL APIs that serve as integration points between systems Real-time Features: Implement real-time capabilities for AI interactions using WebSockets, Server-Sent Events, or similar technologies Collaboration & Communication: Cross-functional Leadership: Work with product managers, data scientists, DevOps teams, and business stakeholders to translate requirements into technical solutions Technical Documentation: Create comprehensive architecture documentation, integration guides, and system design specifications Knowledge Sharing: Conduct technical sessions, workshops, and knowledge transfer meetings with team members and stakeholders Key Experiences Experience & Leadership: 6-8+ years of systems integration experience with 2+ years in technical leadership 
roles Proven team leadership experience including mentoring junior developers and leading technical initiatives 3+ years hands-on experience with LLM integration, Generative AI implementations, and AI/ML pipeline development Technical Skills: LLM Integration Expertise: Deep experience with major LLM providers' APIs, prompt engineering, fine-tuning, and deployment strategies Integration Technologies: Advanced knowledge of REST/GraphQL APIs, message brokers (Kafka, RabbitMQ), ETL tools, and integration platforms Cloud Platforms: Proficiency with AWS, Azure, or GCP, including serverless architectures, container orchestration, and managed AI services React.js Mastery: Strong expertise in React.js, modern JavaScript (ES6+), TypeScript, and state management libraries Database Integration: Experience with both SQL and NoSQL databases, data modeling, and database integration patterns DevOps & Monitoring: Knowledge of CI/CD pipelines, containerization (Docker/Kubernetes), and observability tools Architecture & Design Software Architecture: Strong understanding of microservices, event-driven architectures, and distributed system design patterns Security & Compliance: Knowledge of API security, data encryption, and compliance frameworks (SOC2, GDPR, etc.) Performance Engineering: Experience in system performance optimization, load balancing, and scalability planning Soft Skills Technical Communication: Excellent ability to communicate complex technical concepts to both technical and business stakeholders Problem-Solving: Strong analytical and troubleshooting skills with a solutions-oriented mindset Adaptability: Comfortable working in fast-paced environments with evolving requirements and emerging technologies Preferred Experience Experience with vector databases and semantic search implementations Knowledge of prompt engineering frameworks and AI agent architectures Background in enterprise software integration (SAP, Salesforce, ServiceNow, etc.) 
Experience with infrastructure-as-code (Terraform, CloudFormation) Previous experience in AI/ML product development or consulting What You'll Bring to the Team Technical expertise that can tackle the most complex integration challenges Leadership skills to guide and grow a high-performing engineering team Strategic thinking to align technical solutions with business objectives Hands-on mentality with the ability to dive deep into code when needed Innovation mindset to explore and implement cutting-edge AI integration patterns
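The posting above calls out caching strategies and token management as part of LLM integration performance work. A minimal sketch of one common pattern — caching provider responses keyed by a hash of the model, prompt, and parameters — is shown below; `fake_llm` and all names here are hypothetical stand-ins, not any specific provider's API.

```python
import hashlib
import json

class PromptCache:
    """Cache LLM responses keyed by a hash of (model, prompt, params)."""

    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    def _key(self, model, prompt, **params):
        # Canonical JSON so the same request always hashes identically.
        payload = json.dumps(
            {"model": model, "prompt": prompt, "params": params},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()

    def get_or_call(self, llm_fn, model, prompt, **params):
        key = self._key(model, prompt, **params)
        if key in self._store:
            self.hits += 1
            return self._store[key]
        self.misses += 1
        result = llm_fn(model, prompt, **params)
        self._store[key] = result
        return result

# Hypothetical stand-in for a real provider call (OpenAI, Bedrock, etc.).
def fake_llm(model, prompt, temperature=0.0):
    return f"[{model}] echo: {prompt}"

cache = PromptCache()
a = cache.get_or_call(fake_llm, "gpt-x", "hello", temperature=0.0)
b = cache.get_or_call(fake_llm, "gpt-x", "hello", temperature=0.0)
```

In production this pattern usually sits behind a shared store such as Redis with a TTL, since identical prompts at non-zero temperature are expected to vary.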
Posted 2 days ago
5.0 - 8.0 years
5 - 25 Lacs
pune, maharashtra, india
On-site
Experience: 5-8 Years Work Mode: Pune & Hyderabad Job Type: Full-time Mandatory Skills: Python, SQL, ETL, NumPy / Scikit-learn / Pandas, AI/ML, Gen AI, TensorFlow, PyTorch, AI frameworks like Autogen, Langgraph, CrewAI, Agentforce, Machine Learning, Predictive Machine Learning and LLM. Job Description We are seeking an experienced and driven Senior AI/ML Engineer with 5-8 years of experience in AI/ML – Predictive ML, GenAI and Agentic AI. The ideal candidate should have a strong background in developing and deploying machine learning models, as well as a passion for innovation and problem-solving. Required Qualifications & Skills Bachelor’s or Master’s degree in Computer Science / AIML / Data Science. 5 to 8 years of overall experience and hands-on experience with the design and implementation of Machine Learning models, Deep Learning models, and LLM models for solving business problems. Proven experience working with Generative AI technologies, including prompt engineering, fine-tuning large language models (LLMs), embeddings, vector databases (e.g., FAISS, Pinecone), and Retrieval-Augmented Generation (RAG) systems. Expertise in Python (NumPy, Scikit-learn, Pandas), TensorFlow, PyTorch, transformers (e.g., Hugging Face), or MLlib. Experience working with Agentic AI frameworks like Autogen, Langgraph, CrewAI, Agentforce, etc. Expertise in cloud-based data and AI solution design and implementation using GCP / AWS / Azure, including the use of their Gen AI services. Good experience in building complex and scalable ML and Gen AI solutions and deploying them into production environments. Experience with scripting in SQL, extracting large datasets, and designing ETL flows. Excellent problem-solving and analytical skills with the ability to translate business requirements into data science and Gen AI solutions. Effective communication skills, with the ability to convey complex technical concepts to both technical and non-technical stakeholders. 
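The qualifications above mention Retrieval-Augmented Generation over vector databases like FAISS or Pinecone. The core retrieval step can be sketched with toy embeddings and plain cosine similarity — a minimal illustration only; real systems would use an embedding model plus an indexed vector store rather than a linear scan, and all document names here are invented.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(query_vec, corpus, top_k=1):
    """corpus: list of (doc_id, embedding) pairs; returns top_k doc_ids by similarity."""
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:top_k]]

# Toy 3-dimensional "embeddings" for three hypothetical documents.
corpus = [
    ("refund-policy", [0.9, 0.1, 0.0]),
    ("shipping-faq",  [0.1, 0.9, 0.1]),
    ("api-docs",      [0.0, 0.2, 0.9]),
]
hits = retrieve([0.85, 0.15, 0.05], corpus, top_k=1)
```

The retrieved passages would then be stuffed into the LLM prompt as grounding context — that prompt-assembly step is the "augmented generation" half of RAG.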
Key Responsibilities As an AI/ML Solution Engineer, build AI/ML, Gen AI, Agentic AI empowered practical in-depth solutions for solving customer’s business problems. As an AI/ML Solution Engineer, design, develop, and deploy machine learning models and algorithms. Conduct research and stay up-to-date with the latest advancements in AI/ML, GenAI, and Agentic AI. Lead a team of junior AI engineers, providing direction and support.
Posted 2 days ago
4.0 years
0 Lacs
chennai, tamil nadu, india
On-site
Bolt is on a mission to democratize commerce. We relentlessly prioritize our retailers—putting their brands front and center while enabling frictionless shopping at any touchpoint in the shopper journey. At the center of it all is our rapidly growing universal shopper network—Bolt merchants such as Lucky, Revolve, and FansEdge (Fanatics) can access tens of millions of shoppers, offering them a best-in-class checkout. And revolutionizing ecommerce is only half of the equation—we’re also transforming the way we work. At Bolt, we have created a work environment where people learn to drive impact, take risks, make big bets, and grow from feedback, all while feeling welcomed and accepted for who they are. Come join us on the adventure today! Please note, this is a contract-to-hire opportunity with the option to convert to full-time after 3–6 months. Key Responsibilities: Full-Stack Development: Design, develop, and maintain backend services in Go and frontend applications in TypeScript. Scalability & Performance: Optimize our PostgreSQL, BigQuery, and Elasticsearch databases for high availability and performance. Technical Leadership: Lead incident investigations and on-call rotations, ensuring system reliability and performance. Merchant Collaboration: Communicate directly with merchants to understand their needs, troubleshoot issues, and propose scalable solutions. Security & Fraud Prevention: Implement and maintain fraud detection tools, card tokenization, blocklisting, and compliance with card network standards. API Development: Build and maintain REST APIs and webhooks to integrate with external commerce platforms such as Magento 2, BigCommerce, and Salesforce Demandware. Observability: Use Datadog for logging, monitoring, and tracing across the system, and keep alerts up to date and well managed. Automation & Pattern Recognition: Identify inefficiencies, automate workflows, and develop proactive monitoring solutions. 
Requirements Must Have: 4+ years of experience with AWS, GCP, or a similar cloud infrastructure provider. Experience with testing frameworks such as Jest or Mocha for the frontend, and Ginkgo. Strong knowledge of one of Python, TypeScript, GoLang, Java, or similar languages. Knowledge of database development with Postgres, DynamoDB, and BigQuery. Comfortable thinking about infrastructure as code; experience with Terraform or similar tools is a plus. Nice to Have: Familiarity with Kubernetes and cloud platforms (AWS, GCP, or Azure). Experience with event-driven architecture and message queues (Kafka, RabbitMQ, etc.). Development experience with commerce platforms such as Magento 2, BigCommerce, SFCC Demandware, etc. Experience with developer-experience tooling such as Scalar or Redoc. Knowledge of compliance and security best practices in fintech.
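The webhook integrations and fintech security practices mentioned above usually involve verifying that an incoming webhook really came from the platform that claims to have sent it. A common pattern is an HMAC-SHA256 signature over the raw request body, checked with a constant-time comparison; the sketch below is illustrative, with a hypothetical secret and payload rather than any specific platform's signing scheme.

```python
import hmac
import hashlib

def sign_payload(secret: bytes, payload: bytes) -> str:
    """Produce a hex HMAC-SHA256 signature over the raw webhook body."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_webhook(secret: bytes, payload: bytes, signature: str) -> bool:
    """Constant-time comparison to avoid leaking the signature via timing."""
    expected = sign_payload(secret, payload)
    return hmac.compare_digest(expected, signature)

secret = b"shared-webhook-secret"            # hypothetical shared key
body = b'{"order_id": "123", "total": 4999}'
sig = sign_payload(secret, body)

ok = verify_webhook(secret, body, sig)                                  # genuine
bad = verify_webhook(secret, b'{"order_id": "123", "total": 1}', sig)   # tampered
```

The important detail is signing the raw bytes of the body before any JSON parsing, since re-serialization can change whitespace or key order and break the signature.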
Posted 2 days ago
5.0 years
0 Lacs
ahmedabad, gujarat, india
On-site
Senior DevOps Engineer (Python) Experience: 5+ years Salary: Up to ₹1,00,000 per month About the Role We are looking for a skilled Senior DevOps Engineer with strong expertise in Python automation and cloud infrastructure. You will be responsible for building scalable systems, automating workflows, and ensuring smooth CI/CD pipelines in cloud-native environments. Key Responsibilities Automate infrastructure provisioning, deployment, and monitoring using Python. Build and maintain CI/CD pipelines (Jenkins, GitLab CI, CircleCI, etc.). Design and optimize cloud infrastructure (AWS, GCP, Azure). Work with Docker & Kubernetes for containerization and orchestration. Develop custom Python scripts and tools for monitoring and automation. Implement monitoring systems (Prometheus, Grafana) with custom alerts. Apply DevSecOps best practices to automate security and compliance. Troubleshoot production issues and improve system reliability. Required Skills 5+ years as a DevOps Engineer with Python scripting expertise. Strong experience in AWS/GCP/Azure services (EC2, S3, Lambda, Kubernetes). Hands-on with Docker, Kubernetes, and Infrastructure as Code (Terraform, Ansible). CI/CD automation experience with Python. Familiarity with monitoring tools (Prometheus, Grafana, ELK Stack). Good knowledge of Git, networking, and security best practices. Preferred Qualifications DevOps or Python-related certifications. Knowledge of microservices, GitOps, Helm charts, and serverless technologies. Experience with SQL/NoSQL databases in cloud environments.
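The "custom Python scripts for monitoring" and "custom alerts" responsibilities above boil down to evaluating threshold rules against metric samples, Prometheus-style. A minimal sketch of that rule-evaluation core follows; the metric names and rule format are invented for illustration, not any real alerting DSL.

```python
def evaluate_alerts(samples, rules):
    """samples: {metric_name: latest_value}.
    rules: list of (metric, op, threshold, alert_name) tuples.
    Returns the names of alerts that should fire, in rule order."""
    ops = {">": lambda v, t: v > t, "<": lambda v, t: v < t}
    firing = []
    for metric, op, threshold, name in rules:
        value = samples.get(metric)
        # Missing metrics don't fire here; real systems often alert on absence too.
        if value is not None and ops[op](value, threshold):
            firing.append(name)
    return firing

rules = [
    ("cpu_usage_percent", ">", 90.0, "HighCPU"),
    ("disk_free_percent", "<", 10.0, "LowDisk"),
]
firing = evaluate_alerts({"cpu_usage_percent": 97.2, "disk_free_percent": 42.0}, rules)
```

Production alerting adds a "for" duration (the condition must hold across several scrapes) to avoid flapping, which is exactly what Prometheus alerting rules provide on top of this kind of check.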
Posted 2 days ago
5.0 - 8.0 years
5 - 25 Lacs
pune, maharashtra, india
On-site
Experience: 5-8 Years Work Mode: Pune & Hyderabad Job Type: Fulltime Mandatory Skills: Python, Django, Flask, Microservices Architecture, Snowflake, SQL, Restful API, Kubernetes, and Any Cloud (AWS, Azure or GCP) Role Overview We are looking for experienced Modern Microservice Developers to join our team and contribute to the design, development, and optimization of scalable microservices and data processing workflows. The ideal candidate will have expertise in Python, containerization, and orchestration tools, along with strong skills in SQL and data integration. Key Responsibilities Develop and optimize data processing workflows and large-scale data transformations using Python. Write and maintain complex SQL queries in Snowflake to support efficient data extraction, manipulation, and aggregation. Integrate diverse data sources and perform validation testing to ensure data accuracy and integrity. Design and deploy containerized applications using Docker, ensuring scalability and reliability. Build and maintain RESTful APIs to support microservices architecture. Implement CI/CD pipelines and manage orchestration tools such as Kubernetes or ECS for automated deployments. Monitor and log application performance, ensuring high availability and quick issue resolution. Required Skills 5-8 years of experience in Python development, with a focus on data processing and automation. Proficiency in SQL, with hands-on experience in Snowflake. Strong experience with Docker and containerized application development. Solid understanding of RESTful APIs and microservices architecture. Familiarity with CI/CD pipelines and orchestration tools like Kubernetes or ECS. Knowledge of logging and monitoring tools to ensure system health and performance. Experience with cloud platforms (AWS, Azure, or GCP) is a plus. Education Bachelor's degree in Computer Science, Engineering, or a related field. 
Posted 2 days ago
5.0 years
0 Lacs
karnataka, india
On-site
Who You’ll Work With At Nike, we leverage the power of data and technology to serve athletes* around the world. The Data & AI (DAI) team is at the forefront of this mission—building scalable, secure, and intelligent platforms that power decision-making and personalized experiences across the Nike ecosystem. Who We Are Looking For We are looking for an experienced software engineer with a passion for building robust and scalable data solutions. You thrive in a fast-paced, collaborative environment and have a proven track record of delivering high-quality software. Skillset Required 5+ years of experience in software development, with a strong foundation in distributed systems, cloud-native architectures, and data platforms. Experience with various stages of software development, including design, development, testing, and deployment. Strong programming skills in Python, Java, C++, SQL etc. Deep expertise with cloud platforms like AWS, Azure, or GCP. Hands-on experience with Databricks, Snowflake, AWS RDS, Azure SQL, or GCP Cloud SQL, Apache Spark and Apache Airflow. Strong understanding of Lakehouse architecture, Data Mesh principles, data governance frameworks, and modern data pipelines. Proven ability to deliver high-impact, API-first services at scale. Proficiency with tools like Terraform or AWS CloudFormation is critical for managing cloud infrastructure programmatically, ensuring consistency, and enabling automation. Strong problem-solving and analytical thinking skills. Experience in guiding and supporting junior engineers. Excellent communication and collaboration skills. What You’ll Work On Design, develop, and maintain scalable data solutions. Build and optimize data pipelines, ensuring data quality and reliability. Develop SDKs, APIs, and microservices to support enterprise-wide data and analytics needs. Collaborate with cross-functional teams including product managers, data scientists, and other engineers to deliver high-impact solutions. 
Implement best practices in software development, data governance, and platform observability. Participate in code reviews, provide feedback, and mentor junior engineers.
Posted 2 days ago
5.0 years
0 Lacs
karnataka, india
On-site
Who You’ll Work With At Nike, we leverage the power of data and technology to serve athletes* around the world. The Data & AI (DAI) team is at the forefront of this mission—building scalable, secure, and intelligent platforms that power decision-making and personalized experiences across the Nike ecosystem. Who We Are Looking For We are looking for an experienced software engineer with a passion for building robust and scalable data platforms. You thrive in a fast-paced, collaborative environment and have a proven track record of delivering high-quality software. Skillset Required 5+ years of experience in software development, with a strong foundation in distributed systems, cloud-native architectures, and data platforms. Experience with various stages of software development, including design, development, testing, and deployment. Strong programming skills in Python, Java, C++, SQL etc. Deep expertise with cloud platforms like AWS, Azure, or GCP. Hands-on experience with Databricks, Snowflake, AWS RDS, Azure SQL, or GCP Cloud SQL, Apache Spark and Apache Airflow. Strong understanding of Lakehouse architecture, Data Mesh principles, data governance frameworks, and modern data pipelines. Proven ability to deliver high-impact, API-first services at scale. Proficiency with tools like Terraform or AWS CloudFormation is critical for managing cloud infrastructure programmatically, ensuring consistency, and enabling automation. Strong problem-solving and analytical thinking skills. Experience in guiding and supporting junior engineers. Excellent communication and collaboration skills. What You’ll Work On Design, develop, and maintain scalable data platforms and services. Develop SDKs, APIs, and microservices to support enterprise-wide data and analytics needs. Collaborate with cross-functional teams including product managers, and other engineers to deliver high-impact solutions. 
Implement best practices in software development, data governance, and platform observability. Participate in code reviews, provide feedback, and mentor junior engineers.
Posted 2 days ago
10.0 years
0 Lacs
hyderabad, telangana, india
On-site
Role: Azure Data Architect Experience: 10+ years Location: Hyderabad Position: Permanent We are looking for an experienced Azure Data Architect to lead the design and delivery of enterprise-scale data solutions. This is a hands-on technical leadership role where you will be responsible for shaping data architectures, guiding teams, and ensuring our solutions are robust, scalable, and aligned with industry best practices. You will work with Microsoft Azure and Fabric as the core technology stack, and help bring in AI and advanced analytics capabilities where they add business value. Job Responsibilities Collaborate with various teams/regions in driving and facilitating data design, identifying architectural risks and key areas of improvement in the data landscape, and developing and refining data models and architecture frameworks. Work closely with stakeholders to understand business requirements and translate them into practical, data-driven solutions. Contribute to presales efforts such as preparing RFP responses, creating solution proposals, and supporting presentations. Participate in architecture and design workshops to showcase how Azure Data & AI capabilities can meet business needs, including components such as AI/Machine Learning, AI/Machine Vision, Gen AI, and AI modeling expertise. Technical experience and knowledge in Cloud Data Warehousing, data migration, and data transformation. Develop and test ETL components to high standards of data quality and performance as a hands-on development lead. Familiarity with Databricks, Data Lakes, Data Warehouses, MDM, BI, Dashboards, AI-ML, AI-MV and Gen-AI. Oversee and contribute to the creation and maintenance of relevant data artifacts (data lineages, source to target mappings, high level designs, interface agreements, etc.) in compliance with enterprise level architecture standards. 
Experience in leading and delivering data-centric projects with a concentration on Data Quality and adherence to data standards and best practices. Experience in data modelling, metadata support, development and testing for enterprise-wide data solutions. Must have: Bachelor’s degree in a technical field (Computer Science degree preferred, not mandatory) Expertise in ETL, Data architecture and Data modelling Expertise working in Relational and Big Data platforms, including Databricks, and large-scale distributed systems such as Spark. Expertise in Data Engineering, Data Governance, Data Quality, Data Lake, Data Warehousing and Reporting/Analytics concepts Expertise building Data pipelines, Data Lakes and Data Warehouses in Azure (optionally also AWS and GCP) using Cloud services. Experience in designing Big Data/Cloud solution designs and data models. Knowledge of PySpark, shell scripting, SQL, Python, and some of the standard data science packages. Experience in AI-ML/AI-MV projects. Expertise in Agentic AI. Strong verbal and business communication skills. Good experience interfacing with customers and being able to explain end-to-end technical proposals. Nice to have: Experience in the manufacturing domain (add-on) Microsoft certifications such as Azure Solution Architect Expert or DP-500 and MS Fabric DP-700.
Posted 2 days ago
8.0 - 10.0 years
0 Lacs
sadar, uttar pradesh, india
On-site
Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role Infrastructure Specialists at Kyndryl are project-based subject matter experts in all things infrastructure – good at providing analysis, documenting and diagraming work for hand-off, offering timely solutions, and generally “figuring it out.” This is a hands-on role where your feel for the interaction between a system and its environment will be invaluable to every one of your clients. There are two halves to this role: First, contributing to current projects where you analyze problems and tech issues, offer solutions, and test, modify, automate, and integrate systems. And second, long-range strategic planning of IT infrastructure and operational execution. This role isn’t specific to any one platform, so you’ll need a good feel for all of them. And because of this, you’ll experience variety and growth at Kyndryl that you won’t find anywhere else. You’ll be involved early to offer solutions, help decide whether something can be done, and identify the technical and timeline risks up front. This means dealing with both client expectations and internal challenges – in other words, there are plenty of opportunities to make a difference, and a lot of people will witness your contributions. In fact, a frequent sign of success for our Infrastructure Specialists is when clients come back to us and ask for the same person by name. That’s the kind of impact you can have! This is a project-based role where you’ll enjoy deep involvement throughout the lifespan of a project, as well as the chance to work closely with Architects, Technicians, and PMs. 
Whatever your current level of tech savvy or where you want your career to lead, you’ll find the right opportunities and a buddy to support your growth. Boredom? Trust us, that won’t be an issue. Your future at Kyndryl There are lots of opportunities to gain certifications and qualifications on the job, and you’ll continuously grow your Cloud Hyperscaler expertise. Many of our Infrastructure Specialists are on a path toward becoming either an Architect or Distinguished Engineer, and there are opportunities at every skill level to grow in either of these directions. Who You Are You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others. Required Technical And Professional Experience Knowledge & Skills Required: 8 to 10 years of experience in Networking Hands-on experience in Data Center, WAN and LAN/Wi-Fi – design and build Hands-on experience in any Public Cloud Infrastructure (AWS, Azure, GCP, IBM) Hands-on experience in Network Virtualization Platforms like Cisco ACI and NSX-T Experience creating LLDs for network solutions in Cloud Networking and Data Center Networking. 
In-depth knowledge of F5 load-balancing techniques Competency in security, including firewalling, VPN, micro-segmentation, IPS/IDS Hands-on experience with at least one firewall – Palo Alto, Fortigate, Juniper or Checkpoint Comprehensive knowledge of IP routing protocols, including BGP and OSPF Experience with network automation and scripting languages like Terraform, Ansible and Python Self-motivated and proactive in troubleshooting and identifying issues Network monitoring skills – ThousandEyes, SolarWinds, Splunk Relevant Degree or Experience Desired Certification Networking Certification – CCIE, CCNP or equivalent Firewall Certification – Palo Alto, Fortigate or Juniper Load Balancer certification – F5 Associate-level Cloud certification – AWS/Azure/GCP Being You Diversity is a whole lot more than what we look like or where we come from, it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: Our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way. What You Can Expect With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. 
Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed. Get Referred! If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
Posted 2 days ago
6.0 years
0 Lacs
india
Remote
Job Title: Senior GCP Cloud Engineer/Lead Location: Remote Type: Full-Time Job Description: A role that requires a good understanding of cloud-based infrastructure and application hosting, design, and development. Managing cloud environments with security aspects in mind. Good problem-solving skills with the ability to see and solve issues quickly. Strong interpersonal skills with clear and precise verbal and written communication. Skills Must-haves: 6+ years' relevant experience in designing and developing Google Cloud infrastructure solutions (minimum 3+ years recently active in GCP). Cloud Provider: Good understanding of and hands-on experience with GCP, with exposure to services like (at least 7): Cloud Identity & Identity and Access Management, Role-Based Access Control (RBAC), Compute Engine, Storage (Cloud Storage, Persistent Disks, Cloud Storage Nearline or Coldline, Cloud Filestore, etc.), VPC, Google Load Balancer, Cloud Interconnect, Google Domains, Cloud DNS, Cloud Content Delivery Network, Cloud Pub/Sub, Stackdriver, etc. Awareness of and experience with the Landing Zone concept on Google Cloud services (with hub-and-spoke architecture). SCM tools: Git, Bitbucket, GitLab, etc. Basic understanding of a CI environment for code using at least one of Jenkins / GitHub Actions / GitLab / Azure DevOps / Cloud Build-Cloud Deploy. Hands-on Bash shell scripting. Experience and a good understanding of Terraform scripting. Certification: Associate Cloud Engineer – GCP. Monitoring and Event Management (Cloud Monitoring, Grafana, Prometheus). Nice to have Containerization: Good Docker containerization skills and hands-on Kubernetes orchestration (GKE). Experience with Anthos. Experience with SQL and NoSQL databases. Experience with app migration to the cloud would be an add-on.
Posted 2 days ago
3.0 years
0 Lacs
india
Remote
About the Role: We are looking for an experienced Process Mining Specialist with strong expertise in Celonis, Apromore, or similar process mining platforms. The ideal candidate will have a solid background in business process management (BPM), process optimization, and automation (BPM/RPA). You will work with stakeholders to analyze business processes, identify inefficiencies, and drive digital transformation initiatives. Notice Period: Less than 30 days Work Location: Remote Key Responsibilities: 1. Implement and configure Celonis / Apromore / process mining tools for end-to-end process visibility. 2. Collect, prepare, and transform data from multiple systems (SAP, Oracle, Salesforce, ServiceNow, custom applications, etc.) for process mining. 3. Define and build process models, KPIs, and dashboards to identify bottlenecks, inefficiencies, and compliance issues. 4. Collaborate with business stakeholders to translate findings into actionable process improvement initiatives. 5. Work closely with BPM / RPA teams (UiPath, Automation Anywhere, Blue Prism, n8n, etc.) to enable data-driven automation. 6. Develop and maintain documentation, training, and best practices for process mining use cases. 7. Stay up-to-date with advancements in process mining, task mining, and AI in automation. Required Skills & Experience: 1. 3+ years of experience in process mining using Celonis, Apromore, UiPath Process Mining, or equivalent tools. 2. Strong knowledge of process analysis, process modeling, and process improvement methodologies (Lean/Six Sigma preferred). 3. Hands-on experience in ETL / SQL / data modeling for preparing event logs and data pipelines. 4. Familiarity with ERP/CRM systems (SAP, Oracle, MS Dynamics, Salesforce, etc.) and their event structures. 5. Experience integrating process mining with BPM/RPA platforms (UiPath, Automation Anywhere, Blue Prism, n8n). 6. Ability to engage with senior stakeholders and present findings in a clear, actionable manner. 7. 
Strong analytical, problem-solving, and communication skills. Good to Have: 1. Certifications in Celonis Data Engineer, Celonis Analyst, or Apromore certifications. 2. Exposure to task mining, predictive process monitoring, or AI-driven automation use cases. 3. Experience with Python / R / ML techniques for advanced analytics. 4. Familiarity with cloud platforms (AWS, GCP, Azure) for process mining deployments.
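The event-log preparation and bottleneck analysis described in this posting have a simple computational core: every process mining tool ultimately reduces an event log (case id, activity, timestamp) to per-case metrics such as throughput time. A minimal sketch of that reduction, with an invented toy log rather than any real ERP extract, might look like:

```python
from datetime import datetime

def case_durations(event_log):
    """event_log: list of dicts with case_id, activity, and an ISO-format timestamp.
    Returns {case_id: duration_seconds} from first to last event per case."""
    bounds = {}
    for ev in event_log:
        ts = datetime.fromisoformat(ev["timestamp"])
        cid = ev["case_id"]
        lo, hi = bounds.get(cid, (ts, ts))
        bounds[cid] = (min(lo, ts), max(hi, ts))
    return {cid: (hi - lo).total_seconds() for cid, (lo, hi) in bounds.items()}

# Toy order-to-approval log with two cases.
log = [
    {"case_id": "C1", "activity": "Create Order", "timestamp": "2024-01-01T09:00:00"},
    {"case_id": "C1", "activity": "Approve",      "timestamp": "2024-01-01T10:30:00"},
    {"case_id": "C2", "activity": "Create Order", "timestamp": "2024-01-01T09:15:00"},
    {"case_id": "C2", "activity": "Approve",      "timestamp": "2024-01-01T09:45:00"},
]
durations = case_durations(log)
```

Tools like Celonis or Apromore compute this (and activity-to-activity transition times, which is where bottlenecks surface) at scale from the SQL/ETL event logs the posting mentions, but the underlying case/activity/timestamp model is the same.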
Posted 2 days ago
10.0 years
0 Lacs
kochi, kerala, india
On-site
.Net AI Lead/Architect Job location: Kochi Budget: 30-40L (leads/architects, based on role and experience) Experience: 10+ years Candidates should have strong communication and presentation skills. We are seeking a highly skilled and visionary .NET AI Lead/Architect to lead the design, development, and integration of AI-powered solutions within our enterprise .NET applications. This role requires a deep understanding of .NET architecture and hands-on experience in integrating artificial intelligence and machine learning models into scalable, secure, and performant applications. Key Responsibilities: Design/Architect end-to-end .NET solutions with integrated AI/ML components (e.g., predictive models, NLP, computer vision, recommendation engines). Collaborate with data scientists and ML engineers to integrate trained models (TensorFlow, PyTorch, ONNX, etc.) into .NET-based production environments (e.g., via APIs, containers, or embedded libraries). Define and drive AI integration strategies, including model serving, inferencing pipelines, and continuous learning mechanisms. Lead the development of microservices-based architectures with AI-driven services using .NET Core, C#, and Azure/AWS services. Ensure security, scalability, and performance of AI-enhanced solutions. Stay up to date with emerging trends in the AI and .NET ecosystem and bring innovative ideas to the team. Mentor developers on best practices in AI integration and .NET architectural design. Collaborate with stakeholders to translate business requirements into technical designs involving intelligent automation. Required Skills & Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. 10+ years of experience in software architecture and development using .NET/.NET Core (C#). 3+ years of hands-on experience integrating AI/ML solutions into enterprise applications. 
- Strong understanding of the ML lifecycle, model deployment (e.g., REST APIs, ONNX Runtime, Azure ML, ML.NET), and inferencing in .NET applications.
- Good working experience in front-end technologies such as Angular.
- Experience with cloud platforms (Azure preferred; AWS or GCP acceptable), especially AI-related services (Azure Cognitive Services, AWS SageMaker, etc.).
- Proficiency in containerization and orchestration technologies such as Docker and Kubernetes.
- Experience with DevOps and CI/CD pipelines for AI/ML deployment.
- Familiarity with ML frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) and data handling in enterprise environments.
- Strong understanding of software architecture patterns: microservices, event-driven, domain-driven design (DDD), etc.
- Excellent problem-solving, communication, and leadership skills.
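The "model serving via REST" pattern this role describes (a trained model exposed behind an HTTP endpoint so a .NET or other client can call it for inference) can be sketched as below. This is an illustrative stub, not the posting's actual stack: `predict` stands in for a real inference call (e.g. an ONNX Runtime session), and the `/score`-style handler and all names are hypothetical.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    # Stand-in for real model inference (e.g. an onnxruntime
    # InferenceSession.run()): a toy weighted sum over two features.
    weights = [0.4, 0.6]
    return sum(w * x for w, x in zip(weights, features))

class ScoreHandler(BaseHTTPRequestHandler):
    """POST a JSON body like {"features": [1.0, 2.0]}; returns {"score": ...}."""
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        features = json.loads(body)["features"]
        payload = json.dumps({"score": predict(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

# To serve: HTTPServer(("localhost", 8080), ScoreHandler).serve_forever()
```

A .NET consumer would typically call such an endpoint via HttpClient; the ML.NET/ONNX Runtime route the posting also lists runs inference in-process instead, avoiding the network hop.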
Posted 2 days ago
0 years
0 Lacs
hyderabad, telangana, india
On-site
Hi all, greetings from HCLTech!

About Us
HCL Technologies is a next-generation global technology company that helps enterprises reimagine their businesses for the digital age. Our technology products and services are built on four decades of innovation, with a world-renowned management philosophy, a strong culture of invention and risk-taking, and a relentless focus on customer relationships. HCL also takes pride in its many diversity, social responsibility, sustainability, and education initiatives. Through its worldwide network of R&D facilities and co-innovation labs, global delivery capabilities, and over 220,000 Ideapreneurs across 52 countries, HCL delivers holistic services across industry verticals to leading enterprises, including 250 of the Fortune 500 and 650 of the Global 2000.

How You'll Grow
At HCLTech, we offer continuous opportunities for you to find your spark and grow with us. We want you to be happy and satisfied with your role and to really learn what type of work sparks your brilliance best. Throughout your time with us, we offer transparent communication with senior-level employees, learning and career development programs at every level, and opportunities to experiment in different roles or even pivot industries. We believe you should be in control of your career, with unlimited opportunities to find the role that fits you best.

Why Us
We are one of the fastest-growing large tech companies in the world, with offices in 60+ countries and 222,000 employees. Our company is extremely diverse, with 165 nationalities represented, and offers the opportunity to work with colleagues across the globe. We provide a virtual-first work environment promoting good work-life integration and real flexibility, invest in your growth with learning and career development opportunities at every level, and offer comprehensive benefits for all employees.
We are a certified Great Place to Work and a top employer in 17 countries, offering a positive work environment that values employee recognition and respect.

We are looking for candidates with the skills below.
Location: Hyderabad
Notice period: immediate joiner to 15 days

Job Description / Role Profile - skills required:
1. Core Java
2. React JS
3. Spring or Spring Boot framework
4. Awareness of microservices
5. PCF/Azure/GCP; databases (SQL Server/Oracle/MongoDB)
6. Testing (QA, Selenium)
7. Agile principles; build/management tools: Jira, Jenkins, Sonar, Git, Maven, etc.
8. Basic understanding of design patterns; security (JWT/OAuth/Ping)
Posted 2 days ago