2.0 - 5.0 years
7 - 11 Lacs
Gurugram
Work from Office
Overview: Data is at the heart of our global financial network. In fact, the ability to consume, store, analyze, and gain insight from data has become a key component of our competitive advantage. Our goal is to build and maintain a leading-edge data platform that provides highly available, consistent data of the highest quality for all users of the platform, including our customers, operations teams, and data scientists. We focus on evolving our platform to deliver exponential scale to NCR Atleos, powering our future growth.

Data & AI Engineers at NCR Atleos experience working at one of the largest and most recognized financial companies in the world, while being part of a software development team responsible for next-generation technologies and solutions. Our engineers design and build large-scale data storage, computation, and distribution systems. They partner with data and AI experts to deliver high-quality AI solutions and derived data to our consumers. We are looking for Data & AI Engineers who like to innovate and seek out complex problems. We recognize that strength comes from diversity and will embrace your unique skills, curiosity, drive, and passion while giving you the opportunity to grow technically and as an individual. Engineers looking to work in the areas of orchestration, data modelling, data pipelines, APIs, storage, distribution, distributed computation, consumption, and infrastructure are ideal candidates.

Responsibilities: As a Data Engineer, you will join a Data & AI team transforming our global financial network and improving the quality of the products and services we provide to our customers. You will be responsible for designing, implementing, and maintaining data pipelines and systems to support the organization's data needs. Your role will involve collaborating with data scientists, analysts, and other stakeholders to ensure data accuracy, reliability, and accessibility.
Key Responsibilities:
- Data Pipeline Development: Design, build, and maintain scalable and efficient data pipelines to collect, process, and store structured and unstructured data from various sources.
- Data Integration: Integrate data from multiple sources such as databases, APIs, flat files, and streaming platforms into centralized data repositories.
- Data Modeling: Develop and optimize data models and schemas to support analytical and operational requirements. Implement data transformation and aggregation processes as needed.
- Data Quality Assurance: Implement data validation and quality assurance processes to ensure the accuracy, completeness, and consistency of data throughout its lifecycle.
- Performance Optimization: Monitor and optimize data processing and storage systems for performance, reliability, and cost-effectiveness. Identify and resolve bottlenecks and inefficiencies in data pipelines, and leverage automation and AI to improve overall operations.
- Infrastructure Management: Manage and configure cloud-based or on-premises infrastructure components such as databases, data warehouses, compute clusters, and data processing frameworks.
- Collaboration: Collaborate with cross-functional teams including data scientists, analysts, software engineers, and business stakeholders to understand data requirements and deliver solutions that meet business objectives.
- Documentation and Best Practices: Document data pipelines, systems architecture, and best practices for data engineering. Share knowledge and provide guidance to colleagues on data engineering principles and techniques.
- Continuous Improvement: Stay updated with the latest technologies, tools, and trends in data engineering and recommend improvements to existing processes and systems.
Qualifications and Skills: Bachelor's degree or higher in Computer Science, Engineering, or a related field.
- Proven experience in data engineering or related roles, with a strong understanding of data processing concepts and technologies.
- Mastery of programming languages such as Python, Java, or Scala.
- Knowledge of database systems such as SQL, NoSQL, and data warehousing solutions.
- Knowledge of stream processing technologies such as Kafka or Apache Beam.
- Experience with distributed computing frameworks such as Apache Spark, Hadoop, or Apache Flink.
- Experience deploying pipelines on cloud platforms such as AWS, Azure, or Google Cloud Platform.
- Experience implementing enterprise systems in production settings for AI and natural language processing. Exposure to self-supervised learning, transfer learning, and reinforcement learning is a plus.
- Full-stack experience to build best-fit solutions leveraging Large Language Models (LLMs) and Generative AI, with a focus on privacy, security, and fairness.
- Good engineering skills to structure AI output as nested nodes in JSON, arrays, or HTML for as-is consumption and display on dashboards/portals.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork abilities.
- Experience with containerization and orchestration tools such as Docker and Kubernetes.
- Familiarity with data visualization tools such as Tableau or Power BI.
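The "nested nodes in JSON" requirement above can be illustrated with a short sketch. This is a minimal, hypothetical example (the field names and the `to_dashboard_json` helper are assumptions for illustration, not an NCR Atleos convention) of shaping flat model output into nested JSON that a dashboard could render as-is:

```python
import json

def to_dashboard_json(model_findings):
    """Shape flat model output into nested nodes for as-is dashboard rendering.

    `model_findings` is assumed to be a list of (category, label, score)
    tuples; the nested layout below is illustrative, not a fixed schema.
    """
    by_category = {}
    for category, label, score in model_findings:
        node = by_category.setdefault(
            category, {"type": "category", "name": category, "nodes": []}
        )
        # Each finding becomes a leaf node under its category node.
        node["nodes"].append({"type": "leaf", "label": label, "score": score})
    root = {"type": "root", "nodes": list(by_category.values())}
    return json.dumps(root, indent=2)

payload = to_dashboard_json([
    ("risk", "late-shipment", 0.91),
    ("risk", "stock-out", 0.34),
    ("ops", "reroute", 0.77),
])
```

The same nesting could equally be emitted as HTML fragments; JSON keeps the structure consumable by any front end.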
Posted 1 day ago
7.0 - 12.0 years
9 - 13 Lacs
Hyderabad
Work from Office
Role: Full Stack Java Developer, G11
Location: Hyderabad
About the role: The Channel Management team is responsible for creating multiple enterprise applications, in a scaled agile environment, to assist in the management of self-service devices (e.g. ATMs, retail self-checkout machines, hospitality self-check-in kiosks, etc.). Our applications focus on the management of these endpoints and cover: Inventory Management, Software Distribution, Device Management, automated Help Desk Workflows, and associated Business Intelligence. The successful applicant will contribute to the delivery of on-premise/SaaS-based Enterprise Web Applications to the Financial, Hospitality, and Retail lines of business. We are looking for enthusiastic engineers, with differing levels of experience in developing full-stack Enterprise Java applications, to add momentum to the creation of NCR ATLEOS's next generation of Channel Management products. The successful candidates must be able to contribute to the implementation of user stories across the technology stack.
To be successful we need people who:
- Have 7-12 years of experience in software design and development
- Enthusiastically follow technology trends, software engineering best practices, and technologies
- Take pleasure in seeing smart, practical solutions put in front of customers
- Enjoy the challenge of solving complex problems
- Are open to constantly refreshing and renewing their skills.
Responsibilities
- Contribute to all phases of the development lifecycle
- Develop and review changes with a focus on design, efficiency, and quality
- Work as part of a team as well as autonomously
- Contribute to improvements to the software development process
- Support continuous improvement by investigating alternatives and technologies and presenting these for review
Requirements
- BS/MS degree in Computer Science, Engineering, or a related subject
- Experience developing in an Agile environment, using current engineering best practices
- Experience using CSS3, HTML5, JavaScript
- Experience with at least one MVC framework, e.g. Angular.js, Angular, or React
- Experience using JSP, servlets, Java 8, Spring, Spring Boot, JPA/Hibernate, JUnit
- Experience using an RDBMS, e.g. SQL (MS SQL Server, Oracle)
- Experience with Git or SVN
- Experience with any one cloud: Azure, AWS, or GCP (IaaS, PaaS & SaaS)
- Knowledge of Docker and CI/CD pipelines
Other Skills of Benefit
- Test-driven development
- Software security
- Performance testing
About NCR ATLEOS Corporation: NCR ATLEOS is the global leader in self-service interactions; we are at the forefront of turning everyday transactions into exceptional experiences and making every day easier. NCR ATLEOS's footprint extends over a wide spectrum of areas: from point-of-sale terminals to ATMs, from financial and retail management systems to global payment systems. The industry is changing at an incredible rate with the arrival of new and disruptive technologies and startups, and we want you to be a part of it. This is an exciting time to get involved in the new products and solutions that NCR ATLEOS is developing for this rapidly changing world, applying the latest technologies and development practices.
EEO Statement: Integrated into our shared values is NCR ATLEOS's commitment to diversity.
NCR ATLEOS is committed to being a globally inclusive company where all people are treated fairly, recognized for their individuality, promoted based on performance, and encouraged to strive to reach their full potential. We believe in understanding and respecting differences among all people. NCR ATLEOS does not discriminate in employment based on sex, age, race, color, creed, religion, national origin, disability, sexual orientation, veteran status, military service, genetic information, or any other characteristic or conduct protected by law. Every individual at NCR ATLEOS has an ongoing responsibility to respect and support a globally diverse environment.
Statement to Third Party Agencies: To ALL recruitment agencies: NCR ATLEOS only accepts resumes from agencies on the NCR ATLEOS preferred supplier list. Please do not forward resumes to our applicant tracking system, NCR ATLEOS employees, or any NCR ATLEOS facility. NCR ATLEOS is not responsible for any fees or charges associated with unsolicited resumes.
Posted 1 day ago
6.0 - 11.0 years
5 - 9 Lacs
Bengaluru
Work from Office
- Implement and maintain automated test suites for cloud-native applications and services
- Collaborate with senior engineers, QA, and product stakeholders to identify test areas with high ROI
- Develop reusable test components and contribute to automation infrastructure improvements
- Analyze test results, identify issues, and participate in defect resolution discussions
- Support CI/CD integration efforts by embedding automation into build and release workflows
- Stay current with industry best practices in automation, testing methodologies, and relevant tooling
Qualifications
Mandatory
- 6+ years of experience in a Quality and Automation role
- 4+ years of experience with Selenium, Cypress, Playwright, or other test automation frameworks
- Experience in building automation frameworks
- 2+ years of experience working with SaaS solutions and microservices environments
- Strong debugging skills
- Strong experience with at least one of the three major hyperscalers (AWS, Azure, or GCP)
- Working knowledge of Kubernetes
- 2+ years of experience with backend and API automation with Python or equivalent
- Experience with CI/CD processes and tools
Nice to Have
- Experience with IaC tools such as Terraform, Ansible
- Awareness of observability and performance testing tools (e.g., Prometheus, Grafana, JMeter)
Posted 1 day ago
10.0 - 15.0 years
15 - 19 Lacs
Gurugram
Work from Office
We are seeking a highly skilled DevOps Architect with expertise in OpenShift and Rancher Kubernetes Engine to design, implement, and optimize cloud-native infrastructure. The ideal candidate will have extensive experience in Kubernetes orchestration, containerization, CI/CD pipelines, and cloud automation to drive scalable and resilient deployments.
Key Responsibilities:
- Architect and Implement: Design and deploy scalable, high-availability Kubernetes clusters using OpenShift and Rancher Kubernetes Engine (RKE).
- Automation & Orchestration: Develop and manage infrastructure-as-code (IaC) solutions using Terraform, Helm, or Ansible.
- CI/CD Integration: Implement and optimize CI/CD pipelines using Jenkins, GitLab CI/CD, ArgoCD, or Tekton for automated deployment and testing.
- Security & Compliance: Enforce security best practices for Kubernetes clusters, RBAC policies, service mesh configurations, and container image scanning.
- Monitoring & Logging: Set up observability solutions using Prometheus, Grafana, ELK/EFK Stack, or OpenTelemetry for proactive monitoring and alerting.
- Multi-Cloud & Hybrid Cloud Deployments: Design hybrid cloud and multi-cloud strategies using AWS, Azure, GCP, or on-prem solutions integrated with OpenShift and Rancher.
- SRE & Performance Optimization: Implement SRE best practices for high availability, auto-scaling, and performance tuning of microservices architectures.
- Collaboration: Work closely with development, security, and operations teams to streamline DevOps processes and enable faster deployments.
- Disaster Recovery & Backup: Implement disaster recovery strategies, backup automation, and cluster failover solutions.
Required Skills & Experience:
- Kubernetes & Containerization: Deep understanding of Kubernetes orchestration, OpenShift, and Rancher Kubernetes Engine (RKE2/RKE).
- Containerization & Service Mesh: Experience with Docker, Istio, Linkerd, or Envoy.
- Infrastructure as Code (IaC): Hands-on expertise with Terraform, Helm, and Ansible.
- CI/CD Pipelines: Strong knowledge of Jenkins, GitOps (ArgoCD, FluxCD), and Tekton.
- Cloud Platforms: Experience with AWS, Azure, GCP, and on-premises Kubernetes clusters.
- Monitoring & Logging: Experience with Prometheus, Grafana, ELK/EFK Stack, OpenTelemetry.
- Security & Compliance: Kubernetes RBAC, Pod Security Policies, image scanning, and network policies.
- Scripting & Automation: Proficiency in Bash, Python, or Go for automation and scripting.
- Networking & Load Balancing: Expertise in Kubernetes networking, Ingress controllers (NGINX, Traefik), and service discovery.
- Backup & DR: Experience with Velero, Longhorn, or Kasten for Kubernetes backup and recovery.
Posted 1 day ago
4.0 - 9.0 years
9 - 13 Lacs
Gurugram
Work from Office
We are looking for a seasoned Senior Full Stack Developer who excels at both backend and frontend development and can lead projects from concept to delivery. In this role, you will architect, develop, and maintain robust and scalable web applications using Java (with Spring Boot and Microservices) and React. You will work closely with cross-functional teams to ensure technical excellence, drive innovation, and mentor junior developers.
Key Responsibilities:
- Full-Stack Development: Architect, develop, and maintain robust Java-based backend systems (using Spring Boot, Microservices) and dynamic, high-performance React applications.
- Mentorship: Guide and mentor junior developers, conducting code reviews and ensuring adherence to best practices in software design and development.
- Integration & API Development: Design and implement secure and efficient RESTful APIs for seamless front-end and back-end integration.
- Performance Optimization: Analyze, optimize, and troubleshoot application performance issues, ensuring smooth, scalable operations.
- CI/CD & DevOps: Collaborate with the DevOps team to implement CI/CD pipelines, containerization (Docker, Kubernetes), and automated testing strategies.
- Code Quality & Testing: Ensure high code quality through robust unit, integration, and end-to-end testing practices. Advocate for test-driven development (TDD) where appropriate.
- Continuous Learning: Stay updated with the latest trends and technologies in Java, React, and cloud platforms to drive continuous improvement and innovation.
Required Skills:
- Java Expertise: Extensive experience with Java, Spring Boot, and Microservices architecture.
- Frontend Proficiency: Advanced skills in React.js, along with a strong command of JavaScript, TypeScript, HTML5, and CSS3.
- API Development: Proficiency in designing and consuming RESTful APIs and familiarity with WebSocket implementations.
- Database Knowledge: Experience with both SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB).
- DevOps Familiarity: Hands-on experience with CI/CD tools, Docker, Kubernetes, and cloud platforms (AWS, Azure, GCP).
- Testing & QA: Strong knowledge of unit testing frameworks (JUnit, Jest, Mocha, Cypress) and automated testing practices.
- Agile Methodologies: Proven experience working in Agile development environments, utilizing version control systems (Git, GitHub, Bitbucket).
- Problem Solving: Exceptional analytical and debugging skills with a proactive attitude toward addressing challenges.
Preferred Skills:
- Architectural Leadership: Experience in designing system architectures for large-scale applications.
- Microfrontend Architecture: Familiarity with modern frontend architectural patterns.
- GraphQL Experience: Knowledge of GraphQL for building flexible and efficient APIs.
- Mentorship & Leadership: Prior experience in leading teams or managing projects in a senior role.
- Communication: Excellent verbal and written communication skills with the ability to articulate complex technical concepts to non-technical stakeholders.
Posted 1 day ago
5.0 - 8.0 years
9 - 10 Lacs
Chennai
Work from Office
- Generate and analyze the critical parts list from CMMS3 and liaise with suppliers on a daily basis for shipments.
- Arrange airfreight of critical parts for customer plants as per the procedures in a timely manner; analyze and allocate cost responsibility for each airfreight and get concurrence from the relevant party.
- Use the EXTRACT system for getting authorization of premium freight cost, tracking, and updating the agreed cost.
- Ensure supplier/part resourcing is done effectively and updated in the system after consultation with the customer, supplier, and purchasing.
- Highlight any potential production risk to customer plants and involve other departments as required.
- Check the various options available for assistance from alternative material sources and coordinate accordingly.
- Communicate issues on common parts/commodities to all customer plants to avoid late identification of problems.
- Coordinate contingency plans for shutdowns, strikes, etc. with suppliers, EDCs, and customers to develop alternate plans to tackle potential issues.
- Address and follow up on long-term supplier problems.
- Assist customer plants in verification and communication (alert process, debit notes, etc.).
- Perform release analysis to check for schedule variations and new parts, and take up abnormal variations, wrong releases, packaging issues, etc. with customers to avoid over-shipments, airfreights, and obsolescence due to release issues.
- Manage inventory by analyzing under-shipments and over-shipments on a regular basis, identifying root causes, and resolving them.
- Monitor carrier efficiency by checking for transit delays and analyzing the root cause in consultation with the logistics providers (LLP).
- Improve supplier delivery performance and make effective use of the SUPER-G system for recording and resolving supplier delivery performance and response issues.
- Ensure, educate, and support suppliers to use CMMS3 and DDL, create advance shipping notices (ASN), check the DCI regularly, and input shipment confirmations in CMMS.
- Monitor and assist suppliers on data integrity issues to avoid any criticality due to data discrepancies.
- Generate MIS reports as and when required.
- Support customer plants in case of claims by facilitating the process with the supplier.
- Flexibility to work Asia / Europe / North America timings depending on requirements.
- Participate and contribute in Innovation / TVM activities to realize cost and process efficiency.
Education Qualification: BE
Number of Years of Experience: Minimum 5 - 8 years.
Professional Exposure: Exposure in material flow, manufacturing, supply chain / logistics (added advantage) / procurement.
Special Knowledge or Skill Required: Excellent oral and written communication (in English), knowledge of corporate MRP systems, good analytical skills.
IT Exposure: Operational skills in Alteryx, Power BI / QlikSense / GCP will be an added advantage. Knowledge of MS Office is a must.
Posted 1 day ago
5.0 - 10.0 years
7 - 8 Lacs
Hyderabad
Work from Office
- Title: Safety & PV Spec I
- Experience: 1-2.5 years of PV experience
- Job Location: Office based at Gurgaon / HYD (hybrid)
- Qualification/Education: B.Pharm / M.Pharm / BDS / BMS / MBBS (No BSc / MSc)
- ICSR case processing (spontaneous) is mandatory
- English communication skills are important
- Medical knowledge and terminology
- Enter information into PVG quality and tracking systems for receipt and tracking of ICSRs as required.
- Assists in the processing of ICSRs according to Standard Operating Procedures (SOPs) and project/program-specific safety plans as required.
- Triages ICSRs; evaluates ICSR data for completeness, accuracy, and regulatory reportability.
- Enters data into the safety database.
- Codes events, medical history, concomitant medications, and tests.
- Compiles complete narrative summaries.
- Identifies information to be queried and follows up until information is obtained and queries are satisfactorily resolved.
- Assists in the generation of timely, consistent, and accurate expedited reports in accordance with applicable regulatory requirements.
- Maintains safety tracking for assigned activities.
- Performs literature screening and review for safety, drug coding, maintenance of the drug dictionary, and MedDRA coding as required.
- Validation and submission of xEVMPD product records, including appropriate coding of indication terms using MedDRA.
- Manual recoding of un-recoded product and substance terms arising from ICSRs.
- Identification and management of duplicate ICSRs.
- Activities related to SPOR / IDMP.
- Quality review of ICSRs.
- Ensures all relevant documents are submitted to the Trial Master File (TMF) as per company SOP/sponsor requirements for clinical trials, and to the Pharmacovigilance System Master File for post-marketing programs as appropriate.
- Maintains understanding of and compliance with SOPs, Work Instructions (WIs), global drug/biologic/device regulations, GCP, ICH guidelines, GVP, project/program plans, and the drug development process.
- Fosters constructive and professional working relationships with all project team members, internal and external.
- Participates in audits as required/appropriate.
- Applies safety reporting regulatory intelligence maintained by Syneos Health to all safety reporting activities.
Posted 1 day ago
2.0 - 3.0 years
30 - 35 Lacs
Bengaluru
Work from Office
- Lead end-to-end development of ML/DL models, from data preprocessing to model deployment.
- Design and implement advanced solutions using computer vision, NLP, and generative AI models (e.g., Transformers, GANs).
- Apply and experiment with agentic AI approaches to build autonomous decision-making systems.
- Collaborate with engineering, product, and business stakeholders to align AI solutions with business outcomes.
- Work with large-scale datasets and implement MLOps pipelines for automated training, evaluation, and deployment on cloud (Azure preferred).
- Stay up to date with the latest AI research and apply state-of-the-art techniques in production systems.
- Mentor junior data scientists and contribute to AI knowledge sharing across teams.
Required Skills & Qualifications:
- 8+ years of industry experience in data science, ML, or AI, with demonstrated project ownership.
- Proficiency in Python and frameworks like TensorFlow, PyTorch, OpenCV, and scikit-learn.
- Strong background in generative models (e.g., LLMs, GANs, VAEs) and foundational ML techniques.
- Expertise in computer vision, including object detection, image classification, and segmentation.
- Experience implementing agentic AI systems using modern orchestration tools and frameworks.
- Hands-on experience with Azure AI services (e.g., Azure ML, Cognitive Services, Azure OpenAI, Azure Blob for data handling).
- Solid understanding of MLOps practices (CI/CD, version control, model registry, monitoring).
- Strong analytical, problem-solving, and communication skills.
Preferred Qualifications:
- Master's in Computer Science, Data Science, or a related field.
- Experience working in cloud-native AI environments (Azure, AWS, or GCP).
- Exposure to vector search engines, knowledge graphs, and retrieval-augmented generation (RAG).
- Publications, patents, or active participation in AI communities or open-source contributions.
Posted 1 day ago
4.0 - 9.0 years
11 - 16 Lacs
Chennai
Work from Office
We are looking for a Salesforce Developer with deep knowledge of AI-powered Salesforce features. This role will focus on building scalable, AI-enabled applications and integrations that leverage Einstein AI and Agentforce.
Key Responsibilities:
- Develop, test, and deploy custom solutions using Apex, Lightning Web Components, and Salesforce APIs.
- Integrate AI features into custom applications, including Einstein AI, Prediction Builder, Einstein Discovery, and Prompt Builder.
- Collaborate with data teams to prepare, cleanse, and model data for AI predictions.
- Create custom AI workflows and embed AI-driven recommendations into business processes.
- Work closely with business leaders to design scalable AI solutions.
- Ensure AI solutions comply with security, governance, and compliance requirements.
- Research and implement new AI capabilities from Salesforce releases.
Qualifications:
- 4+ years as a Salesforce Developer.
- Proven experience with core Salesforce functionality and platform features, and in developing AI-driven Salesforce features.
- Proficiency in Apex, LWC, SOQL, and REST/SOAP APIs.
- Understanding of AI model deployment, data pipelines, and Copilot integration.
- Experience integrating Salesforce with external AI/ML services (AWS, Azure, GCP) a plus.
- Strong problem-solving and analytical skills.
Posted 1 day ago
3.0 - 4.0 years
4 - 8 Lacs
Hyderabad
Work from Office
- Design, develop, and optimize backend systems and data pipelines using Python and modern data engineering tools.
- Architect and implement scalable, reliable, and secure systems capable of handling large volumes of data and traffic.
- Work with AWS and Azure to deploy, manage, and optimize cloud-based solutions.
- Collaborate with stakeholders and clients to gather requirements, define scope, and break down tasks into actionable deliverables.
- Translate complex business needs into technical solutions and ensure alignment with overall product goals.
- Lead and mentor junior team members when needed, fostering best practices in code quality and architecture.
- Participate in Agile ceremonies, ensuring smooth sprint planning, backlog grooming, and delivery tracking.
- Troubleshoot, debug, and resolve performance bottlenecks across systems.
Required Skills & Experience:
- 3-4+ years of professional experience in backend development and data engineering.
- Strong Python programming skills, with experience in frameworks like FastAPI, Django, or Flask.
- Solid understanding of data pipelines, ETL processes, and database systems (SQL and NoSQL).
- Proven experience in designing and implementing scalable architectures.
- Hands-on experience with cloud services: AWS, Azure, and GCP (at least two in depth).
- Knowledge of containerization (Docker, Kubernetes) and CI/CD workflows.
- Strong communication skills to interact with stakeholders, clients, and cross-functional teams.
- Experience in requirement gathering, scope definition, and task breakdown.
- Familiarity with Agile/Scrum methodologies.
Nice-to-Have
- Experience with big data frameworks (Spark, Beam, etc.).
- Exposure to microservices architecture.
- Knowledge of infrastructure-as-code tools like Terraform or CloudFormation.
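As a sketch of the ETL work this role describes, here is a minimal, hypothetical extract-transform-load flow using only the standard library (the `payments` table, field names, and cleaning rules are assumptions for illustration): extract raw records, drop and normalize during transform, and load into SQLite.

```python
import sqlite3

def run_etl(raw_rows, conn):
    """Tiny illustrative ETL: extract -> transform -> load into SQLite."""
    # Transform: drop rows missing an amount, normalize currency codes,
    # and coerce amounts to float.
    cleaned = [
        (r["id"], r["currency"].upper(), float(r["amount"]))
        for r in raw_rows
        if r.get("amount") is not None
    ]
    # Load: idempotent table creation, then bulk insert.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS payments (id INTEGER, currency TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO payments VALUES (?, ?, ?)", cleaned)
    return len(cleaned)

conn = sqlite3.connect(":memory:")
loaded = run_etl(
    [
        {"id": 1, "currency": "usd", "amount": "10.5"},
        {"id": 2, "currency": "eur", "amount": None},  # dropped by transform
        {"id": 3, "currency": "inr", "amount": 99},
    ],
    conn,
)
# loaded == 2: the row with a missing amount is filtered out
```

Production pipelines would swap the in-memory lists for source connectors and an orchestrator (e.g. Airflow), but the extract/transform/load shape stays the same.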
Posted 1 day ago
1.0 - 6.0 years
6 - 9 Lacs
Thane
Work from Office
- Collect, clean, and prepare data for machine learning using Pandas, NumPy, etc.
- Build and experiment with Large Language Models (LLMs) using frameworks like LangChain and LangGraph.
- Develop prototypes and components of AI-powered solutions.
- Work on RAG (Retrieval-Augmented Generation) and Agentic AI workflows.
- Assist in building and maintaining data pipelines for model training and inference.
- Collaborate with cross-functional teams to integrate AI features into production.
Must Have
- Strong knowledge of Python and libraries like Pandas, NumPy
- Experience with one or more Python frameworks (Django, Flask, or FastAPI)
- Understanding of Machine Learning algorithms and libraries (scikit-learn, TensorFlow, Keras)
- Exposure to LLMs, RAG, Vector Databases, or Agentic AI concepts
- Experience with REST APIs
- Visualization tools like Matplotlib, Seaborn
- Version control using Git, working with Jupyter Notebooks
Good to Have
- Familiarity with AI-focused IDEs like Cursor or WindSurf
- Exposure to open-source AI platforms such as Hugging Face, Ollama
- Basic understanding of cloud services (AWS, GCP)
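The RAG workflow mentioned above boils down to "retrieve relevant context, then prepend it to the prompt." Here is a deliberately toy, stdlib-only sketch of that shape: real systems would use an embedding model and a vector database (e.g. via LangChain) instead of the bag-of-words similarity assumed here, and the document texts are invented for illustration.

```python
import math
from collections import Counter

def vectorize(text):
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    """Rank docs by similarity to the query and return the top-k passages."""
    q = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    """Retrieval-augmented prompt: grounding context precedes the question."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within five business days.",
    "Our office is closed on public holidays.",
]
prompt = build_prompt("how long do refunds take", docs)
```

The prompt would then be sent to an LLM; only the retrieval step differs between this sketch and a production RAG pipeline.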
Posted 1 day ago
4.0 - 8.0 years
5 - 9 Lacs
Pune
Work from Office
Qualification: B.E. (IT/Computer) or similar
Experience: 4 - 8 years
Skills: Python, Django, GenAI
Location: Pune, Maharashtra
Job Description: We are looking for a talented Python Developer with experience in GenAI (Generative Artificial Intelligence) to join our team. The ideal candidate should have a strong background in Python development and a deep understanding of artificial intelligence concepts, particularly generative models. As a Python developer with GenAI experience, you will play a crucial role in designing, developing, and deploying AI-driven solutions that leverage generative techniques to create novel and innovative outputs.
Required Skills
- Strong experience in development and design using Python
- 5+ years of experience in Python, with advanced skills in Flask/Django
- 1+ years of experience working on GCP or AWS
- Experience working with GenAI or similar AI technologies is highly desirable
- Working knowledge and an in-depth understanding of Docker and building CI/CD pipelines
- Hands-on experience in unit testing
Good to have
- Passion for writing simple, clean, and efficient code
- Fast learner with excellent problem-solving capabilities
- Excellent written and verbal communication skills
- Experience working with large-scale distributed systems is a plus
- Strong analytical and problem-solving skills
- Able to design and build components for the automation platform independently
- Able to assist in the maintenance of the tools and troubleshooting of issues
Why should you join Opcito? We are a dynamic company that believes in designing transformation solutions for our customers with our ability to unify quality, reliability, and cost-effectiveness at any scale.
Our core work culture focuses on adding material value to client products by leveraging best practices in DevOps, like continuous integration, continuous delivery, and automation, coupled with disruptive technologies like cloud, containers, serverless computing, and microservice-based architectures.
Here are some of the perks of working with Opcito:
- Outstanding career development and learning opportunities
- Competitive compensation depending on experience and skill
- Friendly team and enjoyable work environment
- Flexible working schedule
- Corporate and social events
Awards and Recognitions
- Great Place To Work certified 2021-22, 2022-23, 2023-24
- India's Great Mid-Size Workplaces for 2022 by GPTW
- India's Best Workplaces in IT & IT-BPM 2022 - Top 50 by GPTW
- India's Best Workplaces Building a Culture of Innovation by All - Mid-size 2023 by GPTW
- India's Great Mid-Size Workplaces 2023 ranked 31 by GPTW
- India's Best Workplaces for Millennials 2023 by GPTW
- Top Developers India 2022 by Clutch
- ISO/IEC 27001:2013 certified by DNV-GL
Posted 1 day ago
2.0 - 4.0 years
8 - 12 Lacs
Mumbai
Work from Office
About the Role: We're seeking a hands-on AI Automation Engineer to spearhead AI adoption across our organization. This isn't about building models from scratch; it's about solving real business problems with practical AI solutions. You'll collaborate with product, operations, marketing, and engineering teams to identify automation opportunities, build AI-powered tools, and streamline workflows using LLMs, APIs, and automation platforms. If you thrive on experimentation, rapid prototyping, and seeing your solutions make immediate impact, this role is for you.
What You'll Do
Build & Deploy AI Solutions
- Design and implement LLM-powered tools, copilots, and intelligent agents using OpenAI APIs, LangChain, and vector databases
- Create custom chatbots and knowledge assistants using internal data and RAG architectures
- Develop AI-driven anomaly detection tools
Automate & Orchestrate
- Use platforms like n8n, Zapier, or Make to build scalable workflow automations
- Integrate disparate systems through APIs, webhooks, and custom connectors
- Design multi-step AI agent workflows for complex business processes
Collaborate & Scale
- Partner with teams to identify pain points and build targeted AI solutions
- Document processes and create self-service tools for non-technical users
- Establish best practices and frameworks for AI tool development
- Measure impact through usage analytics and feedback loops
What We're Looking For
Must-Haves
- 2-4 years of experience in software development, automation engineering, or AI implementation
- Strong proficiency in Python and JavaScript/Node.js
- Hands-on experience with AI platforms (OpenAI, Claude, Gemini) and prompt engineering
- Working knowledge of LangChain, vector databases (Pinecone, Chroma, Weaviate), and RAG patterns
- Experience with n8n or similar workflow automation tools (Zapier, Make, Retool)
- Understanding of API design, webhooks, and system integrations
Nice-to-Haves
- Experience building internal tools, AI copilots, or business automation solutions
- Knowledge of fraud detection, anomaly detection, or ML monitoring systems
- Familiarity with affiliate marketing, e-commerce, or digital marketing workflows
- Experience with cloud platforms (AWS, GCP, Azure) and containerization (Docker)
- Background in data engineering or working with large datasets
Success Metrics
First 6 Months:
- Launch 3+ high-impact AI tools or automations in production
- Demonstrate measurable time savings or efficiency gains across 2+ departments
- Establish reusable AI workflows adopted beyond the initial implementation scope
- Build a foundation for organization-wide AI adoption and scaling
Ongoing:
- Maintain 90%+ uptime for critical AI automation systems
- Achieve positive ROI on AI implementations through cost savings or revenue generation
- Enable non-technical teams to independently leverage the AI tools you've built
- Train existing team members
Sample Projects You Might Work On
- AI-Powered Fraud Detection: Build real-time anomaly detection systems using transaction data and behavioral patterns
- Internal Knowledge Assistant: Create ChatGPT-like solutions using company data, documentation, and knowledge bases
- Intelligent Process Automation: Design multi-agent workflows that handle complex business processes end-to-end
- Customer Support Copilot: Develop AI assistants that help support teams with intelligent response suggestions and case routing
- Marketing Automation: Build AI-driven content generation and campaign optimization tools
Why Join Us
- Pioneer Role: Be our first AI-focused engineer and shape how we approach intelligent automation
- Immediate Impact: Your solutions will directly improve how teams work and scale operations
- Creative Freedom: Experiment with cutting-edge AI tools and techniques to solve real business problems
- Cross-Functional Collaboration: Work closely with product, marketing, and operations teams to ship meaningful solutions
- Growth Opportunity: Help establish AI best practices and potentially lead a team as we scale
What We Offer: Competitive
salary. Budget for AI tools, courses, and conference attendance Opportunity to work with latest AI technologies and platforms Direct access to leadership and input on technical strategy AI Happy hour every Alternate Friday - Team members work together exploring advancements in AI
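As a concrete illustration of the RAG pattern this posting centers on (retrieve the most relevant internal snippets, then feed them into an LLM prompt), here is a minimal, dependency-free sketch. The bag-of-words similarity stands in for a real embedding model and vector database (Pinecone, Chroma, Weaviate), and the documents and query are invented:

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": a bag-of-words count vector, not a learned embedding.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical internal snippets standing in for a vector store.
docs = [
    "Refunds are processed within 5 business days",
    "API keys can be rotated from the admin console",
    "Fraud alerts are reviewed by the risk team daily",
]

def retrieve(query, k=1):
    # Rank documents by similarity to the query; a real system would
    # do an approximate nearest-neighbor search instead.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

context = retrieve("how do I rotate my API keys?")
# The retrieved context would then be prepended to the LLM prompt.
prompt = f"Answer using this context: {context[0]}"
```

A production assistant would swap `embed` for a real embedding API and `docs` for a vector-database query, but the retrieve-then-prompt shape stays the same.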
Posted 1 day ago
8.0 - 10.0 years
30 - 35 Lacs
kolkata, mumbai, new delhi
Work from Office
Architect end-to-end solutions for time tracking, leave management, and scheduling systems Lead cross-functional teams in implementing scalable and secure workforce platforms Integrate time and absence modules with HRMS, payroll, and ERP systems Define technical standards, governance models, and best practices Collaborate with stakeholders to gather requirements and translate them into technical designs Ensure compliance with labor laws, data privacy regulations, and audit requirements Provide technical leadership and mentorship to implementation teams Conduct system assessments and recommend improvements for performance and usability Required Skills & Expertise Hands-on experience with platforms like Workday, CEIPAL, Kronos, ADP, SAP SuccessFactors Strong knowledge of absence policies, accruals, and compliance frameworks Proficiency in API integration, middleware, and cloud deployment (AWS, Azure, GCP) Familiarity with workflow automation tools and reporting dashboards Excellent communication and stakeholder engagement skills Workday or CEIPAL certification preferred
Posted 1 day ago
2.0 - 5.0 years
4 - 5 Lacs
kolkata, mumbai, new delhi
Work from Office
The Pharmacovigilance (PV) Scientist supports the activities in the Benefit Risk group by ensuring that day-to-day operational activities are completed. The PV Safety Scientist is expected to be professional and diligent, liaise with the director and management group members within the team on any issues, and coordinate the work with the senior PV Scientists. The PV Safety Scientist is also expected to lead by example and ensure quality standards are upheld within the company. Essential functions: Authoring of aggregate reports (PSURs/PBRERs, PADERs/Annual Reports/ACO/DSUR) for submission to local and other Health Authorities. Authoring of Risk Management Plans (RMPs) as part of regular pharmacovigilance activities. Authoring of Signal Management Reports. Performing literature searches and validity checks for the aggregate reports. Reconciliation of relevant process trackers. Extraction and validation of data (RSI, sales, previous reports, RMP, signals). Generation of Line Listings (LL) from the safety database. Providing reliable support for high-priority ad-hoc activities. Necessary skills and abilities: Analytical and problem-solving skills. Sound organizational skills. Able to work within a team in an open and professional manner. Excellent attention to detail and focus on quality. Understanding of ICH-GCP, FDA, EMA, and other relevant global regulations related to PV. Demonstrable ability to analyze and quantify large volumes of data in a concise and scientific manner, in keeping with regulatory deadlines. Awareness of global culture and willingness to work in a matrix environment. Knowledge of other pharmacovigilance processes, with the ability to author/update SOPs or WIs and to identify and author deviations/CAPAs. Ensuring that deliverables to the clients comply with the relevant regulatory requirements and are sent to the client within agreed timelines.
Capability to make concise, accurate, and relevant synopses of medical text and data, and the ability to write unambiguous medical text. Computer proficiency, IT skills, and the ability to work with web-based applications, with familiarity with the Windows operating system and the MS Office suite (Word/Excel/PowerPoint). Educational requirements: Bachelor's/Master's degree in Pharmacy, Nursing, Life Science, or another health-related field, or equivalent qualification/work experience. Experience requirements: 2+ years of experience in pharmacovigilance with a focus on medical writing and/or literature search and/or signal detection. Experience in contributing to the compilation of metrics and participating in discussions about quality internally and with clients. Experience in literature screening activities and/or experience in authoring and reviewing aggregate reports.
Posted 1 day ago
3.0 - 5.0 years
5 - 8 Lacs
chennai
Work from Office
"PLEASE READ THE JOB DESCRIPTION AND APPLY" Data Engineer Job Description Position Overview Yesterday is history, tomorrow is a mystery, but today is a gift. That's why we call it the present. - Master Oogway Join CustomerLabs' dynamic data team as a Data Engineer and play a pivotal role in transforming raw marketing data into actionable insights that power our digital marketing platform. As a key member of our data infrastructure team, you will design, develop, and maintain robust data pipelines, data warehouses, and analytics platforms that serve as the backbone of our digital marketing product development. Sometimes the hardest choices require the strongest wills. - Thanos (but we promise, our data decisions are much easier!) In this role, you will collaborate with cross-functional teams including Data Scientists, Product Managers, and Marketing Technology specialists to ensure seamless data flow from various marketing channels, ad platforms, and customer touchpoints to our analytics dashboards and reporting systems. You'll be responsible for building scalable, reliable, and efficient data solutions that can handle high-volume marketing data processing and real-time campaign analytics. What You'll Do: - Design and implement enterprise-grade data pipelines for marketing data ingestion and processing - Build and optimize data warehouses and data lakes to support digital marketing analytics - Ensure data quality, security, and compliance across all marketing data systems - Create data models and schemas that support marketing attribution, customer journey analysis, and campaign performance tracking - Develop monitoring and alerting systems to maintain data pipeline reliability for critical marketing operations - Collaborate with product teams to understand digital marketing requirements and translate them into technical solutions Why This Role Matters: I can do this all day. - Captain America (and you'll want to, because this role is that rewarding!)
You'll be the backbone behind the data infrastructure that powers CustomerLabs' digital marketing platform, making marketers' lives easier and better. Your work directly translates to smarter automation, clearer insights, and more successful campaigns, helping marketers focus on what they do best while we handle the complex data heavy lifting. Sometimes you gotta run before you can walk. - Iron Man (and sometimes you gotta build the data pipeline before you can analyze the data!) Our Philosophy: We believe in the power of data to transform lives, just like the Dragon Warrior transformed the Valley of Peace. Every line of code you write, every pipeline you build, and every insight you enable has the potential to change how marketers work and succeed. We're not just building data systems - we're building the future of digital marketing, one insight at a time. Your story may not have such a happy beginning, but that doesn't make you who you are. It is the rest of your story, who you choose to be. - Soothsayer What Makes You Special: We're looking for someone who embodies the spirit of both Captain America's unwavering dedication and Iron Man's innovative genius. You'll need the patience to build robust systems (like Cap's shield) and the creativity to solve complex problems (like Tony's suit). Most importantly, you'll have the heart to make a real difference in marketers' lives. Inner peace... Inner peace... Inner peace... - Po (because we know data engineering can be challenging, but we've got your back!)
Key Responsibilities
Data Pipeline Development - Design, build, and maintain robust, scalable data pipelines and ETL/ELT processes - Develop data ingestion frameworks to collect data from various sources (databases, APIs, files, streaming sources) - Implement data transformation and cleaning processes to ensure data quality and consistency - Optimize data pipeline performance and reliability
Data Infrastructure Management - Design and implement data warehouse architectures - Manage and optimize database systems (SQL and NoSQL) - Implement data lake solutions and data governance frameworks - Ensure data security, privacy, and compliance with regulatory requirements
Data Modeling and Architecture - Design and implement data models for analytics and reporting - Create and maintain data dictionaries and documentation - Develop data schemas and database structures - Implement data versioning and lineage tracking
Data Quality, Security, and Compliance - Ensure data quality, integrity, and consistency across all marketing data systems - Implement and monitor data security measures to protect sensitive information - Ensure privacy and compliance with regulatory requirements (e.g., GDPR, CCPA) - Develop and enforce data governance policies and best practices
Collaboration and Support - Work closely with Data Scientists, Analysts, and Business stakeholders - Provide technical support for data-related issues and queries
Monitoring and Maintenance - Implement monitoring and alerting systems for data pipelines - Perform regular maintenance and optimization of data systems - Troubleshoot and resolve data pipeline issues - Conduct performance tuning and capacity planning
Required Qualifications
Experience - 2+ years of experience in data engineering or related roles - Proven experience with ETL/ELT pipeline development - Experience with a cloud data platform (GCP) - Experience with big data technologies
Technical Skills - Programming Languages: Python, SQL, Golang (preferred) - Databases: PostgreSQL, MySQL, Redis - Big Data Tools: Apache Spark, Apache Kafka, Apache Airflow, DBT, Dataform - Cloud Platforms: GCP (BigQuery, Dataflow, Cloud Run, Cloud SQL, Cloud Storage, Pub/Sub, App Engine, Compute Engine, etc.) - Data Warehousing: Google BigQuery - Data Visualization: Superset, Looker, Metabase, Tableau - Version Control: Git, GitHub - Containerization: Docker
Soft Skills - Strong problem-solving and analytical thinking - Excellent communication and collaboration skills - Ability to work independently and in team environments - Strong attention to detail and data quality - Continuous learning mindset
Preferred Qualifications
Additional Experience - Experience with real-time data processing and streaming - Knowledge of machine learning pipelines and MLOps - Experience with data governance and data catalog tools - Familiarity with business intelligence tools (Tableau, Power BI, Looker, etc.) - Experience using AI-powered tools (such as Cursor, Claude, Copilot, ChatGPT, Gemini, etc.) to accelerate coding, automate tasks, or assist in system design (we believe in running with the machine, not against it)
Interview Process 1. Initial Screening: Phone/video call with HR 2. Technical Interview: Deep dive into data engineering concepts 3. Final Interview: Discussion with senior leadership Note: This job description is intended to provide a general overview of the position and may be modified based on organizational needs and candidate qualifications.
Our Team Culture We are Groot. - We work together, we grow together, we succeed together. We believe in: - Innovation First - Like Iron Man, we're always pushing the boundaries of what's possible - Team Over Individual - Like the Avengers, we're stronger together than apart - Continuous Learning - Like Po learning Kung Fu, we're always evolving and improving - Making a Difference - Like Captain America, we fight for what's right (in this case, better marketing!)
Growth Journey There is no charge for awesomeness... or attractiveness. - Po Your journey with us will be like Po's transformation from noodle maker to Dragon Warrior: - Level 1: Master the basics of our data infrastructure - Level 2: Build and optimize data pipelines - Level 3: Lead complex data projects and mentor others - Level 4: Become a data engineering legend (with your own theme music!) What We Promise I am Iron Man. - We promise you'll feel like a superhero every day! - Work that matters - Every pipeline you build helps real marketers succeed - Growth opportunities - Learn new technologies and advance your career - Supportive team - We've got your back, just like the Avengers - Work-life balance - Because even superheroes need rest! Apply: https://customerlabs.freshteam.com/jobs
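A toy illustration of the "data transformation and cleaning" stage such a pipeline performs: validate and normalize raw marketing events before aggregating spend. The event fields and values are invented; a production pipeline would run this logic in Dataflow, Spark, or DBT rather than plain Python:

```python
from datetime import datetime

# Hypothetical raw ad-click events as they might land from an ingestion API.
raw_events = [
    {"ts": "2024-03-01T10:00:00", "campaign": " spring_sale ", "cost": "1.50"},
    {"ts": "2024-03-01T10:05:00", "campaign": "spring_sale", "cost": "bad"},
    {"ts": "2024-03-01T10:07:00", "campaign": "BRAND", "cost": "0.75"},
]

def clean(event):
    """Validate and normalize one event; return None to drop bad records."""
    try:
        return {
            "ts": datetime.fromisoformat(event["ts"]),
            "campaign": event["campaign"].strip().lower(),
            "cost": float(event["cost"]),
        }
    except (KeyError, ValueError):
        return None

# Keep only records that pass validation, then aggregate spend per campaign.
cleaned = [e for e in (clean(ev) for ev in raw_events) if e is not None]
spend = {}
for e in cleaned:
    spend[e["campaign"]] = spend.get(e["campaign"], 0.0) + e["cost"]
```

The malformed `"cost": "bad"` record is dropped rather than crashing the pipeline, which is the usual trade-off for high-volume marketing data: quarantine bad rows, keep the stream flowing.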
Posted 1 day ago
2.0 - 8.0 years
8 - 9 Lacs
bengaluru
Work from Office
Aris Global Software Pvt. Ltd. is looking for a Cloud Engineer - I to join our dynamic team and embark on a rewarding career journey. A Cloud Engineer is responsible for designing, implementing, and maintaining an organization's cloud computing infrastructure. The following are some key responsibilities for a Cloud Engineer: 1. Design and implement cloud computing solutions using technologies such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP). 2. Configure and maintain cloud infrastructure, including virtual machines, storage systems, and network settings. 3. Monitor and optimize cloud performance, including resource utilization and cost management. 4. Implement and maintain security and compliance measures to ensure the confidentiality, integrity, and availability of data and systems. 5. Troubleshoot and resolve cloud infrastructure issues, and perform root cause analysis to prevent future incidents. 6. Stay up-to-date with the latest cloud computing technologies, trends, and best practices. Requirements: 1. Strong knowledge of cloud computing technologies, such as virtualization, containers, and automation. 2. Experience with scripting languages such as Python, Bash, or PowerShell. 3. Good understanding of network protocols, security best practices, and compliance standards. 4. Strong problem-solving and analytical skills. 5. Excellent communication and interpersonal skills.
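One small example of the scripting this role calls for (Python for cost management and resource-utilization monitoring): flag likely-idle instances and estimate the spend at stake. The instance records are invented stand-ins for what a real cloud monitoring or billing API would return:

```python
# Hypothetical instance inventory; a real script would fetch this from
# a cloud provider's monitoring/billing API instead of hard-coding it.
instances = [
    {"name": "web-1", "cpu_pct": 12, "monthly_cost": 120.0},
    {"name": "batch-1", "cpu_pct": 85, "monthly_cost": 300.0},
    {"name": "idle-1", "cpu_pct": 2, "monthly_cost": 90.0},
]

def underutilized(instances, cpu_threshold=10):
    """Flag instances likely wasting spend (CPU below the threshold)."""
    return [i["name"] for i in instances if i["cpu_pct"] < cpu_threshold]

flagged = underutilized(instances)
# Potential monthly savings if the flagged instances were rightsized.
savings = sum(i["monthly_cost"] for i in instances if i["name"] in flagged)
```

In practice a check like this would run on a schedule and feed an alerting channel, with the threshold tuned per workload.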
Posted 1 day ago
7.0 - 11.0 years
12 - 17 Lacs
kolkata, mumbai, new delhi
Work from Office
Responsibilities: Lead end-to-end architecture for enterprise applications and platforms Define and document solution architectures, including integration and data flow Collaborate with stakeholders to gather requirements and propose technical strategies Guide development teams through implementation, ensuring adherence to design principles Evaluate emerging technologies and recommend adoption strategies Ensure compliance with security, scalability, and performance standards Conduct architecture reviews and provide mentorship to junior architects and engineers Drive DevOps, CI/CD, and cloud-native practices across projects Skills: Proficiency in cloud platforms (AWS, Azure, GCP) Expertise in microservices, API design, and containerization (Docker, Kubernetes) Strong grasp of DevOps tools (Terraform, Jenkins, GitLab CI/CD) Experience with security architecture and governance models Excellent communication and stakeholder management skills
Posted 1 day ago
2.0 - 3.0 years
3 - 4 Lacs
indore, pune, bengaluru
Work from Office
Responsibilities: Design, develop, and deploy end-to-end machine learning pipelines in cloud-native environments, ensuring scalability and reliability. Collaborate with data scientists to productionize ML models, transitioning from prototype to enterprise-ready solutions. Optimize cloud-based data architectures and ML systems for high performance and cost efficiency (AWS, Azure, GCP). Integrate ML models into existing and new system architectures, designing robust APIs for seamless model consumption. Implement MLOps and LLMOps best practices, including CI/CD for ML models, monitoring, and automated retraining workflows. Continuously assess and improve ML system performance, ensuring high availability and minimal downtime. Stay ahead of AI and cloud trends, collaborating with cloud architects to leverage cutting-edge technologies. Qualifications: 4+ years of experience in cloud-native ML engineering, deploying and maintaining AI models at scale. Hands-on experience with AI cloud platforms (Azure ML, Google AI Platform, AWS SageMaker) and cloud-native services. Strong programming skills in Python and SQL, with expertise in cloud-native tools like Kubernetes and Docker. Experience building automated ML pipelines, including data preprocessing, model deployment, and monitoring. Proficiency in Linux environments and cloud infrastructure management. Experience operationalizing GenAI applications or AI assistants is a plus. Strong problem-solving, organizational, and communication skills. Location: Bengaluru, Indore, Pune
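A minimal sketch of the serve-and-monitor loop behind "CI/CD for ML models, monitoring, and automated retraining": the preprocessing bounds and the threshold "model" are invented placeholders for a trained estimator, and a real deployment would use a platform such as SageMaker or Vertex AI rather than plain functions:

```python
def preprocess(x):
    # Scale a raw feature into [0, 1]; the bound of 100 is illustrative.
    return max(0.0, min(1.0, x / 100.0))

def model(x):
    # Stand-in "model": a fixed threshold instead of a trained estimator.
    return 1 if x > 0.5 else 0

# Lightweight serving metrics of the kind a monitoring system would scrape.
metrics = {"served": 0, "positive": 0}

def serve(raw):
    """Handle one prediction request and record usage metrics."""
    pred = model(preprocess(raw))
    metrics["served"] += 1
    metrics["positive"] += pred
    return pred

preds = [serve(v) for v in [10, 60, 90]]
# An automated-retraining workflow might alert or retrain if this rate drifts.
positive_rate = metrics["positive"] / metrics["served"]
```

The point of the sketch is the shape, not the model: prediction, metric capture, and a drift signal live in one loop, so monitoring and retraining triggers come for free with serving.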
Posted 1 day ago
0.0 - 1.0 years
8 - 10 Lacs
hyderabad
Work from Office
Google Cloud Platform: GCS, DataProc, BigQuery, Dataflow. Programming languages: Java; scripting languages such as Python, Shell Script, and SQL. 5+ years of experience in IT application delivery with proven experience in agile development methodologies. 1 to 2 years of experience in Google Cloud Platform (GCS, DataProc, BigQuery, Composer, and data processing with Dataflow).
Posted 1 day ago
3.0 - 6.0 years
6 - 8 Lacs
noida
Work from Office
Engineer with 3+ years of experience working in a GCP environment and its relevant tools/services (BigQuery, DataProc, Dataflow, Cloud Storage, Terraform, Tekton, Cloud Run, Cloud Scheduler, Astronomer/Airflow, Pub/Sub, Kafka, Cloud Spanner streaming, etc.). 1-2+ years of strong experience in Python development (object-oriented/functional programming, Pandas, PySpark, etc.). 1-2+ years of strong experience in SQL (CTEs, window functions, aggregate functions, etc.).
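The SQL skills listed here (CTEs, window functions, aggregate functions) can be exercised locally with SQLite from Python; the orders table below is invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('a', 10), ('a', 30), ('b', 20);
""")

# A CTE feeding a window function: each order's share of its customer's total.
rows = conn.execute("""
    WITH totals AS (
        SELECT customer, amount,
               SUM(amount) OVER (PARTITION BY customer) AS cust_total
        FROM orders
    )
    SELECT customer, amount, amount / cust_total AS share
    FROM totals
    ORDER BY customer, amount
""").fetchall()
```

The same query runs unchanged on BigQuery apart from the table setup; SQLite has supported window functions since version 3.25, which ships with current Python builds.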
Posted 1 day ago
4.0 - 8.0 years
18 - 20 Lacs
chennai
Work from Office
Roles and Responsibilities: Responsible for programming and testing of cloud applications. Integration of user-facing elements developed by front-end developers with server-side logic. Optimization of the application for maximum speed and scalability. Design and implementation of data storage solutions. Writing reusable, testable, and efficient code. Design, code, test, debug, and document software according to the functional requirements. Participate as a team member in fully agile Scrum deliveries. Provide Low-Level Design Documents for the components. Work collaboratively in an Agile/Scrum team environment. Test-driven development based on unit tests. Preferred Skills: Good to have knowledge of API design using SwaggerHub. Good to have knowledge of the SignalR API for web functionality implementation and data broadcasting. Good to have knowledge of cloud and CI/CD. Knowledge of continuous integration. Excellent teamwork and communication abilities. Excellent organizational and time management abilities. Effective scrum master experience. Requirements Skill Requirements: Bachelor/Master of Engineering or equivalent in Computers/Electronics and Communication with 5-7 years of experience. Hands-on experience in web application development using Angular. Hands-on experience in C#, ASP.NET development. Dev-level cloud application certification is recommended. Proficiency in designing and implementing microservices-based applications, with a strong understanding of microservices design principles, patterns, and best practices. Experience working in multiple cloud environments - Azure, AWS web services, and GCP. Experience in developing and consuming web services using gRPC. Strong knowledge of RESTful APIs, HTTP protocols, JSON, XML, and microservices using serverless cloud technologies.
Integration of data storage solutions like databases, key-value stores, and blob stores. User authentication and authorization between multiple systems, servers, and environments. Management of the hosting environment and deployment of update packages. Excellent analytical and problem-solving abilities. Strong understanding of object-oriented programming. Basic understanding of front-end technologies, such as JavaScript, TypeScript, HTML5, and CSS. Strong unit test and debugging skills. Proficient understanding of code versioning tools such as Git and SVN. Hands-on experience with the PostgreSQL database. Knowledge of Azure IoT, MQTT, Apache Kafka, Kubernetes, and Docker is a plus. Experience with version control systems such as Git & SVN. Good understanding of Agile-based software development and the software delivery process. Experience in Requirements Management tools like Polarion (preferred) or any other requirement management system. Excellent communication and collaboration abilities, with the capacity to work effectively in cross-functional teams.
Posted 1 day ago
4.0 - 6.0 years
10 - 14 Lacs
bengaluru
Work from Office
Google Cloud Platform: GCS, DataProc, BigQuery, Dataflow. Programming languages: Java; scripting languages such as Python, Shell Script, and SQL. 5+ years of experience in IT application delivery with proven experience in agile development methodologies. 1 to 2 years of experience in Google Cloud Platform (GCS, DataProc, BigQuery, Composer, and data processing with Dataflow).
Posted 1 day ago
5.0 - 8.0 years
5 - 8 Lacs
hyderabad, pune
Work from Office
Candidate should be proficient in handling L3-level support, should possess a minimum of 8 years of experience, and must be available to join at the earliest opportunity. We are seeking a Cloud DevOps Engineer with hands-on experience in cloud-native environments and, preferably, a strong working knowledge of Wiz Cloud Native Application Protection. You will be responsible for reliable delivery of Wiz capabilities and supporting Wiz integration with HSBC cloud (AWS, GCP, Azure, AliCloud) and containers infrastructure. This is a cross-functional role that bridges DevOps and Security, enabling secure and compliant cloud deployment and increased visibility of cloud and container security posture. Key Responsibilities: Wiz management plane: delivery and management to automate day-to-day operations - user onboarding, Wiz policy and settings updates, migration between Wiz tenants, and automation of Wiz integration with cloud and container platforms. Wiz Integration & Management: onboard new cloud and container accounts into Wiz; build recipes for integrating Wiz with CI/CD pipelines, container registries, and ticketing systems. Implement custom Wiz reports for different stakeholders by leveraging the Wiz API and Graph using GraphQL, Python, etc. Build and maintain infrastructure-as-code for Wiz deployments and configurations. Contribute to security runbooks, incident response, and policy-as-code documentation. Responsible for updating and patching Wiz management plane software and infrastructure. Support integration efforts with downstream reporting tools by provisioning and maintaining the service accounts and API catalogue needed. Key Requirements: 1. Experience in building a management plane for IT systems on cloud platforms to monitor system health using GitOps and CI/CD pipelines, preferably using Google technology, e.g. Cloud Run. 2. Experience in process automation for making changes to the Wiz platform, e.g. creating new roles, updating roles, and automating user onboarding. 3. Strong scripting or coding skills (Python, Go, etc.) to implement custom reports and APIs in Wiz for vulnerability scanning reporting, configuration baselines, and runtime security. 4. Proficiency with at least one major cloud provider (GCP, AWS). 5. Infrastructure-as-Code (IaC) experience with Terraform and Helm. 6. Experience with CI/CD tools (GitHub Actions, Jenkins, Cloud Run, AWS Systems Manager). 7. Experience in debugging and troubleshooting Wiz-related issues. Mandatory Skills: Google Cloud Admin. Experience: 5-8 Years.
Posted 1 day ago
5.0 - 8.0 years
8 - 18 Lacs
pune, bengaluru, mumbai (all areas)
Hybrid
Experience: 5 to 8 years. Notice period: immediate to 30 days only. Location: Mumbai/Bangalore/Pune. Mandatory Skills: Terraform, GCP. Desired Skills: Terraform: Infrastructure-as-Code for GCP provisioning. Google Cloud Platform (GCP): Compute Engine, Cloud Storage, IAM, VPC, Cloud Functions. Linux/Unix Administration: shell scripting and system troubleshooting. CI/CD Tools: Jenkins, GitHub Actions, GitLab CI. Version Control: Git (GitHub/GitLab/Bitbucket). Containerization: Docker and Kubernetes (GKE preferred). Job Description: 5-8 years of strong experience in designing, developing, and maintaining Terraform scripts to provision and manage GCP infrastructure. Manage GCP IAM roles and policies, ensuring secure access and compliance. Automate deployment workflows for healthcare applications, ensuring scalability, reliability, and security. Collaborate with development, QA, and product teams to integrate and optimize CI/CD pipelines. Troubleshoot deployment issues and provide root cause analysis and resolution. Maintain and improve infrastructure documentation, including architecture diagrams and runbooks. Stay updated with the latest DevOps tools and GCP features, and recommend improvements. Participate in code reviews, sprint planning, and release management activities. Excellent written and oral communication.
Posted 1 day ago