
44,412 GCP Jobs - Page 46

Set up a Job Alert
JobPe aggregates listings for easy access; you apply directly on the original job portal.

13.0 - 17.0 years

32 - 35 Lacs

Noida

Work from Office

- Google Cloud Platform: GCS, Dataproc, BigQuery, Dataflow
- Programming languages: Java; scripting languages such as Python, Shell Script, SQL
- 5+ years of experience in IT application delivery with proven experience in agile development methodologies
- 1 to 2 years of experience in Google Cloud Platform (GCS, Dataproc, BigQuery, Composer, and data processing tools such as Dataflow)

Posted 3 days ago

Apply

4.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

About IRIS
IRIS Business Services Limited (IRIS) is a leading regtech SaaS provider listed on both the BSE and NSE. Established in 2000, IRIS empowers over 30 regulators and 6,000 enterprises across 54+ countries, positively impacting more than 2 billion lives. Our innovative solutions transform regulatory compliance into a competitive business advantage. Headquartered in Mumbai, IRIS operates subsidiaries in the USA, Singapore, Malaysia, and Italy, with an affiliate in the UAE. IRIS is also a proud member of XBRL jurisdictions worldwide, including XBRL International, India, Europe, South Africa, and the USA. In India, IRIS is an authorized GST Suvidha Provider and a private Invoice Registration Portal. Our commitment to digital innovation has earned us numerous accolades. To read more about IRIS, visit our website: www.irisbusiness.com

Key Responsibilities
- Develop and deploy AI/ML models for document understanding, including text extraction, section classification, and table detection from PDFs, plus semantic similarity and concept mapping to XBRL taxonomies
- Build and fine-tune NLP models (transformers, embeddings, entity recognition) for financial texts
- Collaborate with product and data engineering teams to integrate AI pipelines with ETL and tagging engines
- Implement human-in-the-loop workflows for model feedback and active learning
- Optimize model performance across diverse document formats and industries (MFRS, IFRS, GRI, etc.)
- Track model metrics, validate outputs, and handle retraining and continuous learning cycles
- Maintain clean, modular, and reusable code using MLOps best practices (versioning, reproducibility, CI/CD)

Required Skills & Qualifications
- 1–4 years of hands-on experience in AI/ML model development and deployment
- Strong programming skills in Python, with experience in libraries like scikit-learn, PyTorch, TensorFlow, Hugging Face Transformers, and OpenAI
- Proven experience with NLP tasks: classification, NER, information extraction, embeddings
- Experience with PDF and document parsing tools: PDFMiner, LayoutLM, Camelot, Tesseract, etc.
- Experience with cloud platforms, particularly Azure (Azure Machine Learning, Azure AI Foundry, Azure Form Recognizer, Azure Cognitive Services), and optionally AWS/GCP
- Experience with model serving and deployment in production environments (e.g., Docker, Azure Kubernetes Service)
- Working knowledge of MLOps frameworks (MLflow, DVC, Airflow, etc.)
- Exposure to XBRL, XML, or structured financial reporting formats is a strong advantage
- Strong problem-solving and analytical skills with attention to detail
- Exposure to Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG)
- Experience using openpyxl in Python
- Familiarity with Office JS integration

Good to Have
- Familiarity with financial statement structures and accounting terminology
- Experience working in regulatory technology, compliance platforms, or SupTech
- Understanding of XBRL taxonomies (IFRS, MFRS, GRI, SEC)

Educational Qualifications
Bachelor's or master's degree in Computer Science/Engineering, Mathematics, Statistics, or related quantitative fields

Awards won by IRIS
- Recognized as India's best Fintech at the Financial Express Best Banks Awards, an award presented to our CEO by Smt Nirmala Sitharaman, Finance Minister, Govt of India.
- Selected as the Best Tax Technology Service Provider 2022 in the National Taxation Awards category at the prestigious TIOL Awards.
- IRIS CARBON won The Most Agile/Responsive SaaS Solution of the Year award at the 2022 SaaS Awards by Awarding and Consultancy International.

At IRIS CARBON, we are committed to creating a diverse and inclusive environment. We are an equal opportunity employer and welcome applicants from all backgrounds.
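The concept-mapping responsibility above (matching extracted financial line items to XBRL taxonomy concepts via semantic similarity) can be sketched with a simple bag-of-words cosine similarity; this is a hypothetical stand-in for the transformer embeddings the role actually calls for, and the function names are illustrative:

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Toy term-frequency vector; real pipelines would use transformer embeddings
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def map_to_taxonomy(line_item: str, concepts: list[str]) -> str:
    """Return the taxonomy concept most similar to an extracted line item."""
    v = vectorize(line_item)
    return max(concepts, key=lambda c: cosine(v, vectorize(c)))
```

In practice the human-in-the-loop workflow mentioned above would surface low-similarity matches for manual review.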

Posted 3 days ago

Apply

10.0 - 12.0 years

10 - 20 Lacs

Chennai

Work from Office

Requirements Elicitation, Understanding, Analysis, & Management
• Understand the project's vision and requirements, and contribute to the creation of the supplemental requirements, building the low-level technical specifications for a particular platform and/or service solution.

Project Planning, Tracking, & Reporting
• Estimate the tasks and resources required to design, create (build), and test the code for assigned module(s).
• Provide inputs in creating the detailed schedule for the project.
• Support the team in project planning activities, in evaluating risks, and in shuffling priorities based on unresolved issues.
• During development and testing, ensure that assigned parts of the project/modules are on track with respect to schedules and quality.
• Note scope changes within the assigned modules and work with the team to shuffle priorities accordingly.
• Communicate regularly with the team about development changes, scheduling, and status.
• Participate in project review meetings.
• Track and report progress for assigned modules.

Design
• Create a detailed low-level design (LLD) for the assigned piece(s), with possible alternate solutions.
• Ensure that the LLD meets business requirements.
• Submit the LLD for review.
• Revise the LLD for the assigned piece(s) based on comments received from the team.

Development & Support
• Build the code of high-priority and complex systems according to the functional specifications, detailed design, maintainability, and coding and efficiency standards.
• Use code management processes and tools to avoid versioning problems.
• Ensure that the code does not affect the functioning of any external or internal systems.
• Perform peer reviews of code to ensure it meets coding and efficiency standards.
• Act as the primary reviewer of application code created by software engineers to ensure compliance with defined standards, and recommend changes as required.

Testing & Debugging
• Attend the test design walkthroughs to help verify that the plans and conditions will test all functions and features effectively.
• Perform impact analysis for issues assigned to self and to Software Engineers/Senior Engineers.
• Actively assist with project- and code-level problem solving, such as suggesting paths to explore when testing engineers or software engineers encounter a debugging problem, and escalate urgent issues.

Documentation
• Review technical documentation for the code for accuracy, completeness, and usability.
• Document and maintain the reviews conducted and the unit test results.

Process Management
• Adhere to the project and support processes.
• Adhere to best practices and comply with approved policies, procedures, and methodologies, such as the SDLC cycle for different project sizes.
• Show responsibility for corporate funds, materials, and resources.
• Ensure adherence to SDLC and audit requirements.

Coaching and Mentoring
• Act as a technical subject matter expert for the internal team in areas such as system functionality and approach, including solving systems operations issues and performance initiatives, leveraging existing knowledge and expertise in multiple ways.
• Build team skills using formal and/or informal training sessions.
• Create and maintain knowledge repositories for lessons learnt and developments in the respective domains.

Key Responsibilities:
- Design, develop, and implement identity and access management solutions using Okta.
- Integrate Okta with internal and third-party applications using SAML, OIDC, SCIM, and API-based provisioning.
- Develop and maintain custom Okta Workflows and automation scripts.
- Configure and manage Okta Universal Directory, MFA, and lifecycle policies.
- Collaborate with security, infrastructure, and application teams to ensure secure and seamless user experiences.
- Troubleshoot and resolve issues related to authentication, authorization, and provisioning.
- Maintain documentation of configurations, processes, and best practices.
- Stay updated on IAM trends, Okta product updates, and security best practices.
- Be flexible with shifts as per project requirements and provide on-call support during weekends.

Required Skills & Qualifications:
- Experienced IAM implementation engineer with over 8 years of expertise in designing, deploying, and managing IAM solutions.
- 5+ years of experience working with Okta Identity Cloud and Okta Access Gateway (OAG).
- Strong understanding of identity federation protocols (SAML, OAuth 2.0, OIDC).
- Experience with CI/CD pipelines and DevOps tools.
- Experience with Okta Workflows, API integrations, and SCIM provisioning.
- Proficiency in scripting languages (e.g., JavaScript, Python, PowerShell).
- Familiarity with Active Directory, LDAP, and cloud platforms (AWS, Azure, GCP).
- Strong problem-solving skills and attention to detail.
- Experience with other IAM platforms (e.g., Azure AD/Entra ID, Ping Identity) is a plus.
- Excellent communication and collaboration skills.

Location: This position can be based in any of the following locations: Chennai

For internal use only: R000107343
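The SCIM-based provisioning mentioned above centers on the SCIM 2.0 core User schema; below is a minimal sketch of building such a payload (the field choices are illustrative, not a complete Okta integration):

```python
def build_scim_user(user_name: str, given: str, family: str, email: str) -> dict:
    """Build a minimal SCIM 2.0 core User payload for a provisioning request."""
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": user_name,
        "name": {"givenName": given, "familyName": family},
        "emails": [{"value": email, "primary": True, "type": "work"}],
        "active": True,
    }
```

A real integration would POST this payload to the target application's SCIM endpoint and handle deactivation via `"active": False` during lifecycle offboarding.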

Posted 3 days ago

Apply

7.5 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Project Role: Business Process Architect
Project Role Description: Analyze and design new business processes to create the documentation that guides the implementation of new processes and technologies. Partner with the business to define product requirements and use cases to meet process and functional requirements. Participate in user and task analysis to represent business needs.
Must-have skills: Data Analytics, Data Warehouse ETL Testing, Hadoop Administration, Big Data Analysis Tools and Techniques
Good-to-have skills: NA
Minimum 7.5 years of experience is required
Educational Qualification: Specific undergraduate qualifications, i.e., engineering or computer science

Summary: Experienced Data Engineer with a strong background in Azure data services and broadcast supply chain ecosystems. Skilled in OTT streaming protocols, cloud technologies, and project management.

Roles & Responsibilities:
- Proven experience as a Data Engineer or in a similar role.
- Lead and provide expert guidance to Principal Solutions & Integration.
- Track and report on project progress using internal applications.
- Transition customer requirements to on-air operations with proper documentation.
- Scope projects and ensure adherence to budgets and timelines.
- Generate design and integration documentation.

Professional & Technical Skills:
- Strong proficiency in Azure data services: Azure Data Factory, Azure Databricks, Azure SQL Database.
- Experience with SQL, Python, and big data tools: Hadoop, Spark, Kafka.
- Familiarity with data warehousing, ETL techniques, and microservices in a cloud environment.
- Knowledge of broadcast supply chain ecosystems: BMS, RMS, MAM, Playout, MCR/PCR, NLE, Traffic.
- Experience with OTT streaming protocols, DRM, and content delivery networks.
- Working knowledge of cloud technologies: Azure, Docker, Kubernetes, AWS basics, GCP basics.
- Basic understanding of AWS Media Services: MediaConnect, Elemental, MediaLive, MediaStore, Media2Cloud, S3, Glacier.

Networking:
- Apply basic networking knowledge, including TCP/IP, UDP/IP, IGMP, DHCP, DNS, and LAN/WAN technologies, to support video delivery systems.

Highly Desirable:
- Experience in defining technical solutions with over 99.999% reliability.

Additional Information:
- Minimum of 5 years of experience in Data Analytics disciplines.
- Good presentation and documentation skills; excellent interpersonal skills.
- Undergraduate qualifications in engineering or computer science.
- This position is based at our Bengaluru office.
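For context on the "99.999%" target above: an availability percentage translates directly into an annual downtime budget, which a quick calculation makes concrete:

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960, using the Julian year

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Annual downtime budget implied by an availability percentage."""
    return (1 - availability_pct / 100) * MINUTES_PER_YEAR

# Five nines leaves roughly five minutes of downtime per year;
# three nines leaves nearly nine hours.
print(round(downtime_minutes_per_year(99.999), 2))  # ~5.26 minutes
print(round(downtime_minutes_per_year(99.9), 1))    # ~526.0 minutes
```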

Posted 3 days ago

Apply

9.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About the Role
We are looking for an experienced Senior Consultant with strong expertise in Java development and practical exposure to Generative AI (GenAI) technologies. The ideal candidate will design and deliver scalable solutions, integrate AI-driven capabilities into enterprise applications, and provide technical leadership to development teams.

Key Responsibilities
- Design, develop, and maintain scalable applications using Java, Spring Boot, and microservices.
- Architect and integrate Generative AI solutions (e.g., LLMs, embeddings, RAG pipelines) into business applications.
- Collaborate with data scientists and AI engineers to operationalize models.
- Develop and optimize APIs for AI/ML services and ensure secure, efficient integration.
- Implement cloud-native solutions on platforms such as AWS/Azure/GCP.
- Guide and mentor junior engineers, conducting code reviews and knowledge-sharing sessions.
- Partner with business stakeholders to translate requirements into technical architecture and solutions.
- Ensure delivery aligns with best practices in software engineering, CI/CD, testing, and TDD/BDD.

Required Skills & Experience
- 4–9 years of strong development experience in Java, Spring Boot, and microservices.
- Hands-on experience with Generative AI/LLMs (e.g., OpenAI, Anthropic, Hugging Face, LangChain, RAG pipelines).
- Strong knowledge of RESTful APIs, GraphQL, and event-driven architectures.
- Experience with cloud services (AWS Lambda, S3, DynamoDB / GCP Vertex AI / Azure Cognitive Services).
- Exposure to vector databases (e.g., Pinecone, Weaviate, FAISS, Milvus).
- Familiarity with prompt engineering and AI model fine-tuning.
- Solid understanding of data pipelines and integrating AI services into enterprise workflows.
- Strong grasp of software engineering best practices: TDD, CI/CD, Agile methodologies.

Good to Have
- Experience in Python for AI/ML integrations.
- Familiarity with Kubernetes, Docker, and Terraform for deployment.
- Knowledge of MLOps practices.
- Prior consulting experience in digital transformation / AI-led initiatives.
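The RAG pipelines this posting mentions follow a retrieve-then-generate pattern; the toy sketch below uses lexical overlap in place of real embeddings and a vector database (all names are illustrative):

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by token overlap with the query (stand-in for vector search)."""
    q_tokens = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q_tokens & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the augmented prompt an LLM would receive."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

In a production system the retrieval step would query a vector database (Pinecone, Weaviate, FAISS, etc.) with embedding vectors, and the prompt would be sent to an LLM API.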

Posted 3 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Overview of 66degrees
66degrees is a leading consulting and professional services company specializing in developing AI-focused, data-led solutions leveraging the latest advancements in cloud technology. With our unmatched engineering capabilities and vast industry experience, we help the world's leading brands transform their business challenges into opportunities and shape the future of work. At 66degrees, we believe in embracing the challenge and winning together. These values guide us not only in achieving our goals as a company but also for our people. We are dedicated to creating a significant impact for our employees by fostering a culture that sparks innovation and supports professional and personal growth along the way.

Role Overview
The Gradient Specialist Program is our differentiator training program that focuses on preparing recent graduates for their careers in technology consulting. Learning from our very own Google Cloud Certified experts, we nurture your talent and accelerate your learning through structured training, hands-on building, and mentorship. Completing this program means you are a GCP Certified 66degrees Specialist prepared for your career with the fastest-growing Google Premier Partner. Are you ready to become our next GCP Specialist? This exciting opportunity is based out of our Bengaluru office.

Responsibilities
A Gradient Specialist's responsibilities and duties are as follows:
- Complete the Gradient Development Program training.
- Pursue and obtain Google Cloud Platform certifications based on your matched career track.
- Work with technical and business leads to translate global business requirements into sound solutions.

Qualifications
- Graduated with a Bachelor's Degree in Computer Science, Statistics, Data Science, or similar, in 2025.
- Programming or scripting experience in any language, Python preferred.
- Experience with AI/ML, Data Analytics, or Data Science.
- Strong interpersonal, verbal, and written communication skills.
- Ability to use analytical skills to solve complex problems.
- Strong organizational skills, including the ability to prioritize, handle multiple projects simultaneously, and meet deadlines.
- Self-motivated and able to work independently or as part of a team.
- Ability to commit to our Gradient Specialist Program and the technical career that follows.

66degrees is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to actual or perceived race, color, religion, sex, gender, gender identity, national origin, age, weight, height, marital status, sexual orientation, veteran status, disability status, or other legally protected class.

Posted 3 days ago

Apply


10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Flex is the diversified manufacturing partner of choice that helps market-leading brands design, build and deliver innovative products that improve the world. We believe in the power of diversity and inclusion and cultivate a workplace culture of belonging that views uniqueness as a competitive edge and builds a community that enables our people to push the limits of innovation to make great products that create value and improve people's lives. A career at Flex offers the opportunity to make a difference and invest in your growth in a respectful, inclusive, and collaborative environment. If you are excited about a role but don't meet every bullet point, we encourage you to apply and join us to create the extraordinary. To support our extraordinary teams who build great products and contribute to our growth, we're looking to add a Principal Network Engineer - IT located in Chennai. We are seeking a highly experienced and skilled Senior Network Engineer to join our Global Networks and Telecoms (GNT) team. In this role, you will be responsible for designing, implementing, and managing complex on-prem and cloud-based networking solutions for Flex, with over 150 sites across 35 countries. The ideal candidate should have extensive experience in on-prem and cloud networking technologies, a deep understanding of multiple network protocols and technologies, strong analytical and problem-solving skills, and experience working with large enterprise networks. The right candidate will be comfortable in a fast-moving organization, will enjoy digging into operational problems to implement the process enhancements and technical solutions that solve them, and is able to quickly learn and pick up new domain expertise and technologies. The candidate will work together with a larger global network team to support Flex's global network operations across Asia, Europe, and the Americas.
Reporting to the Director, the role involves:

What a typical day looks like:
- Designing, building, and maintaining complex, highly scalable and available network solutions deployed on-prem, in public clouds (such as AWS, Azure, GCP), and/or in hybrid environments.
- Supporting the DevOps cultural transformation of the global network services team at Flex by promoting, planning, and driving the deployment of new processes to eliminate wasteful, repetitive tasks that can be automated.
- Providing technical guidance and mentorship to other network engineers and participating in the training and development of team members.
- Creating and maintaining detailed documentation, including network diagrams, configurations, policies, procedures, and runbooks.
- Staying current with, evaluating, and recommending the latest network technologies, security threats, and industry best practices. This involves continuous learning and professional development to ensure that the network infrastructure supports business growth and transformation.
- Collaborating with cross-functional teams, including application engineers, operations, and security, to ensure the integrity, security, and scalability of our cloud-based infrastructure.
- Acting as a senior escalation point for complex network incidents, performing deep-dive diagnostics and root cause analysis.
- Deploying and configuring enterprise network infrastructure, including switches, routers, firewalls, load balancers, ADCs, VPNs, NAC, WLCs, Zero Trust, SD-WAN, and hybrid cloud solutions.
- Consulting on issues and requests from internal customers that require the implementation of on-prem and cloud networking infrastructure solutions.
- Ensuring optimal performance, resilience, and high availability of enterprise network infrastructure across regions.
- Evaluating and integrating emerging technologies such as SASE, ZTNA, SDN, and AI-driven networking (AIOps).
The experience we're looking to add to our team:
- Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related technical discipline, or equivalent experience.
- 10+ years' experience in deploying and maintaining both on-prem and cloud network infrastructure.
- Proven experience managing a global enterprise network spanning data centers, branches, and cloud environments for a multinational company.
- Expert knowledge of networking protocols and technologies such as TCP/IP, DNS, BGP, OSPF, VLAN, VPN, IPsec, QoS, WAN, LAN, WLAN, packet capture & analysis, etc.
- Knowledge and experience of SD-WAN technologies, preferably Fortinet.
- Deep understanding of network security concepts and best practices, and extensive experience with next-generation firewalls (NGFW), preferably Fortinet and Palo Alto.
- Solid working experience with leading network solutions from Fortinet, Juniper, Cisco, F5, Palo Alto, Alkira NaaS, and HPE Aruba.
- Excellent communication skills, with the ability to clearly articulate complex technical issues to both technical and non-technical stakeholders.
- A good understanding of the following: Linux; Python/Ansible/Terraform; Git/GitHub/GitLab; Infrastructure as Code; CI/CD pipelines.

Desired Certifications:
- Fortinet Certified Professional.
- Juniper JNCIP-DC certification or equivalent.
- CCNP or CCIE Enterprise or Security.
- F5 Certified BIG-IP Administrator or higher certification.

What you'll receive for the great work you provide:
- Health Insurance
- Paid Time Off

NK99 Site

Flex is an Equal Opportunity Employer and employment selection decisions are based on merit, qualifications, and abilities. We celebrate diversity and do not discriminate based on: age, race, religion, color, sex, national origin, marital status, sexual orientation, gender identity, veteran status, disability, pregnancy status, or any other status protected by law.
We're happy to provide reasonable accommodations to those with a disability for assistance in the application process. Please email accessibility@flex.com and we'll discuss your specific situation and next steps (NOTE: this email does not accept or consider resumes or applications. This is only for disability assistance. To be considered for a position at Flex, you must complete the application process first).
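The subnet and VLAN planning implicit in managing 150+ sites can be illustrated with the standard-library `ipaddress` module (the CIDR values below are invented for illustration):

```python
import ipaddress

def split_site_network(cidr: str, new_prefix: int) -> list[str]:
    """Carve a site supernet into equal per-VLAN subnets."""
    net = ipaddress.ip_network(cidr)
    return [str(subnet) for subnet in net.subnets(new_prefix=new_prefix)]

# A /22 per site yields four /24s, e.g. user, voice, IoT, and management VLANs
print(split_site_network("10.20.0.0/22", 24))
```

Allocating a fixed-size supernet per site keeps route summarization simple: each site advertises one /22 upstream regardless of internal VLAN layout.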

Posted 3 days ago

Apply

7.0 - 9.0 years

4 - 8 Lacs

Gurugram

Work from Office

Role Description:
As a Technical Lead - Cloud Security at Incedo, you will be responsible for designing and implementing security solutions for cloud-based environments. You will work with clients to understand their security needs and design security solutions that meet those needs. You will be skilled in cloud security technologies such as Amazon Web Services (AWS) Security, Microsoft Azure Security, or Google Cloud Platform (GCP) Security, and have experience with security architecture design patterns such as multi-factor authentication and encryption.

Roles & Responsibilities:
- Developing and implementing cloud security strategies and policies
- Conducting security audits and assessments
- Collaborating with other teams to ensure compliance with security regulations and standards
- Troubleshooting and resolving security issues
- Providing guidance and mentorship to junior cloud security specialists
- Staying up-to-date with industry trends and best practices in cloud security

Technical Skills Requirements:
- Understanding of cloud security concepts such as data protection, identity and access management, or encryption
- Familiarity with compliance frameworks such as SOC 2, HIPAA, or PCI DSS
- Experience with cloud security tools such as AWS Identity and Access Management (IAM), Azure Active Directory, or Google Cloud IAM
- Knowledge of network security and security monitoring technologies
- Excellent communication skills, with the ability to communicate complex technical information to non-technical stakeholders in a clear and concise manner
- Understanding of, and alignment with, the company's long-term vision
- Openness to new ideas and willingness to learn and develop new skills
- Ability to work well under pressure and manage multiple tasks and priorities

Qualifications:
- 7-9 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
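The multi-factor authentication pattern named above is commonly implemented with time-based one-time passwords (TOTP, RFC 6238), which is small enough to sketch with the standard library alone. This is a teaching sketch, not a production authenticator:

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(secret_b32: str, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    key = base64.b32decode(secret_b32, casefold=True)
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, period: int = 30) -> str:
    """Time-based OTP: HOTP driven by the current 30-second window (RFC 6238)."""
    return hotp(secret_b32, int(time.time()) // period)
```

Authenticator apps and the server share the base32 secret; both compute the same code for the current time window, so possession of the secret-holding device becomes the second factor.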

Posted 3 days ago

Apply

7.0 - 9.0 years

7 - 12 Lacs

Hyderabad

Work from Office

Role Description:
As a Technical Lead - Cloud Data Platform (GCP) at Incedo, you will be responsible for managing and optimizing the Google Cloud Platform environment, ensuring its performance, scalability, and security. You will work closely with data analysts and data scientists to develop data pipelines and run data science experiments. You will be skilled in cloud computing platforms such as AWS or Azure and have experience with big data technologies such as Hadoop or Spark. You will be responsible for configuring and optimizing the GCP environment, ensuring that data pipelines are efficient and accurate, and troubleshooting any issues that arise. You will also work with the security team to ensure that the GCP environment is secure and complies with relevant regulations.

Roles & Responsibilities:
- Designing, developing, and deploying cloud-based data platforms using Google Cloud Platform (GCP)
- Integrating and processing large amounts of structured and unstructured data from various sources
- Implementing and optimizing ETL processes and data pipelines
- Developing and maintaining security and access controls
- Collaborating with other teams to ensure the consistency and integrity of data
- Troubleshooting and resolving data platform issues

Technical Skills Requirements:
- In-depth knowledge of GCP services and tools such as Google Cloud Storage, Google BigQuery, and Google Cloud Dataflow
- Experience in building scalable and reliable data pipelines using GCP services, Apache Beam, and related big data technologies
- Familiarity with cloud-based infrastructure and deployment, specifically on GCP
- Strong knowledge of programming languages such as Python, Java, and SQL
- Excellent communication skills, with the ability to communicate complex technical information to non-technical stakeholders in a clear and concise manner
- Understanding of, and alignment with, the company's long-term vision
- Openness to new ideas and willingness to learn and develop new skills
- Ability to work well under pressure and manage multiple tasks and priorities

Qualifications:
- 7-9 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
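The ETL pipelines described above typically follow a parse, filter, aggregate shape; below is a minimal pure-Python sketch of one such stage (the record fields are invented for illustration; on Dataflow each step would become an Apache Beam transform):

```python
import json
from collections import defaultdict

def run_pipeline(lines: list[str]) -> dict[str, float]:
    """Parse JSON event lines, drop invalid records, and total amounts per user."""
    parsed = (json.loads(line) for line in lines)
    valid = (rec for rec in parsed
             if rec.get("user_id") and rec.get("amount") is not None)
    totals: dict[str, float] = defaultdict(float)
    for rec in valid:
        totals[rec["user_id"]] += float(rec["amount"])
    return dict(totals)
```

Keeping each stage a simple function makes the logic unit-testable locally before it is wired into a distributed runner.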

Posted 3 days ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before applying for a job, select your preferred language from the options available at the top right of this page. Discover your next opportunity within an organization that ranks among the world's 500 largest companies. Envision innovative opportunities, experience our rewarding culture, and work with talented teams that push you to grow every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, motivation, autonomy, or leadership to lead teams, there are roles suited to your aspirations and skills, today and tomorrow.

Job Description

Looking for an experienced GCP Cloud/DevOps Engineer and/or OpenShift engineer to design, implement, and manage cloud infrastructure and services across multiple environments. This role requires deep expertise in Google Cloud Platform (GCP) services, DevOps practices, and Infrastructure as Code (IaC). The candidate will be deploying, automating, and maintaining high-availability systems, and implementing best practices for cloud architecture, security, and DevOps pipelines.
Requirements Bachelor's or master's degree in computer science, Information Technology, or a similar field Must have 7 + years of extensive experience in designing, implementing, and maintaining applications on GCP and OpenShift Comprehensive expertise in GCP services such as GKE, Cloudrun, Functions, Cloud SQL, Firestore, Firebase, Apigee, GCP App Engine, Gemini Code Assist, Vertex AI, Spanner, Memorystore, Service Mesh, and Cloud Monitoring Solid understanding of cloud security best practices and experience in implementing security controls in GCP Thorough understanding of cloud architecture principles and best practices Experience with automation and configuration management tools like Terraform and a sound understanding of DevOps principles Proven leadership skills and the ability to mentor and guide a technical team Key Responsibilities Cloud Infrastructure Design and Deployment: Architect, design, and implement scalable, reliable, and secure solutions on GCP. Deploy and manage GCP services in both development and production environments, ensuring seamless integration with existing infrastructure. Implement and manage core services such as BigQuery, Datafusion, Cloud Composer (Airflow), Cloud Storage, Data Fusion, Compute Engine, App Engine, Cloud Functions and more. Infrastructure as Code (IaC) and Automation Develop and maintain infrastructure as code using Terraform or CLI scripts to automate provisioning and configuration of GCP resources. Establish and document best practices for IaC to ensure consistent and efficient deployments across environments. DevOps And CI/CD Pipeline Development Create and manage DevOps pipelines for automated build, test, and release management, integrating with tools such as Jenkins, GitLab CI/CD, or equivalent. Work with development and operations teams to optimize deployment workflows, manage application dependencies, and improve delivery speed. 
Security And IAM Management Manage user and service accounts in Google Cloud IAM. Set up and manage Secret Manager and Cloud Key Management Service for secure storage of credentials and sensitive information. Implement network and data security best practices to ensure compliance and security of cloud resources. Performance Monitoring And Optimization Monitoring & Security: Set up observability tools such as Prometheus and Grafana, and integrate security tools (e.g., SonarQube, Trivy). Networking & Storage: Configure DNS, networking, and persistent storage solutions in Kubernetes. Set up monitoring and logging (e.g., Cloud Monitoring, Cloud Logging, Error Reporting) to ensure systems perform optimally. Troubleshoot and resolve issues related to cloud services and infrastructure as they arise. Workflow Orchestration Orchestrate complex workflows using the Argo Workflow Engine. Containerization: Work extensively with Docker for containerization and image management. Optimization: Troubleshoot and optimize containerized applications for performance and security. Technical Skills Expertise with GCP and OpenShift (OCP) services, including but not limited to Compute Engine, Kubernetes Engine (GKE), BigQuery, Cloud Storage, Pub/Sub, Data Fusion, Airflow, Cloud Functions, and Cloud SQL. Proficiency in scripting languages such as Python, Bash, or PowerShell for automation. Familiarity with DevOps tools and CI/CD processes (e.g., GitLab CI, Cloud Build, Azure DevOps, Jenkins) Contract Type: Permanent At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.
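As an illustrative aside (not part of the posting), provisioning automation of the kind this role describes typically wraps cloud API calls in idempotent retries with exponential backoff. A minimal Python sketch of that pattern, where `fake_provision` stands in for a real (and here entirely hypothetical) GCP API call:

```python
import time
import random

def retry_with_backoff(func, max_attempts=5, base_delay=0.01, transient=(TimeoutError,)):
    """Call func(), retrying transient failures with exponential backoff plus jitter.

    Cloud provisioning calls fail transiently often enough that retrying
    idempotent operations is a standard DevOps practice.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return func()
        except transient:
            if attempt == max_attempts:
                raise
            # Backoff grows as base_delay * 2^(attempt-1), with a little jitter.
            time.sleep(base_delay * (2 ** (attempt - 1)) + random.uniform(0, base_delay))

# Hypothetical flaky provisioning call: fails twice, then succeeds.
calls = {"n": 0}
def fake_provision():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient API error")
    return "resource-created"
```

Calling `retry_with_backoff(fake_provision)` absorbs the two simulated failures and returns the success value on the third attempt.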

Posted 3 days ago

Apply

7.0 years

0 Lacs

chennai, tamil nadu, india

On-site

Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level. Job Description We are looking for an experienced GCP Cloud/OpenShift DevOps Engineer to design, implement, and manage cloud infrastructure and services across multiple environments. This role requires deep expertise in Google Cloud Platform (GCP) services, DevOps practices, and Infrastructure as Code (IaC). The candidate will deploy, automate, and maintain high-availability systems and implement best practices for cloud architecture, security, and DevOps pipelines. 
Requirements Bachelor's or master's degree in Computer Science, Information Technology, or a similar field Must have 7+ years of extensive experience in designing, implementing, and maintaining applications on GCP and OpenShift Comprehensive expertise in GCP services such as GKE, Cloud Run, Cloud Functions, Cloud SQL, Firestore, Firebase, Apigee, App Engine, Gemini Code Assist, Vertex AI, Spanner, Memorystore, Service Mesh, and Cloud Monitoring Solid understanding of cloud security best practices and experience implementing security controls in GCP Thorough understanding of cloud architecture principles and best practices Experience with automation and configuration management tools such as Terraform, and a sound understanding of DevOps principles Proven leadership skills and the ability to mentor and guide a technical team Key Responsibilities Cloud Infrastructure Design and Deployment: Architect, design, and implement scalable, reliable, and secure solutions on GCP. Deploy and manage GCP services in both development and production environments, ensuring seamless integration with existing infrastructure. Implement and manage core services such as BigQuery, Data Fusion, Cloud Composer (Airflow), Cloud Storage, Compute Engine, App Engine, Cloud Functions, and more. Infrastructure as Code (IaC) and Automation Develop and maintain infrastructure as code using Terraform or CLI scripts to automate provisioning and configuration of GCP resources. Establish and document best practices for IaC to ensure consistent and efficient deployments across environments. DevOps And CI/CD Pipeline Development Create and manage DevOps pipelines for automated build, test, and release management, integrating with tools such as Jenkins, GitLab CI/CD, or equivalent. Work with development and operations teams to optimize deployment workflows, manage application dependencies, and improve delivery speed. 
Security And IAM Management Manage user and service accounts in Google Cloud IAM. Set up and manage Secret Manager and Cloud Key Management Service for secure storage of credentials and sensitive information. Implement network and data security best practices to ensure compliance and security of cloud resources. Performance Monitoring And Optimization Monitoring & Security: Set up observability tools such as Prometheus and Grafana, and integrate security tools (e.g., SonarQube, Trivy). Networking & Storage: Configure DNS, networking, and persistent storage solutions in Kubernetes. Set up monitoring and logging (e.g., Cloud Monitoring, Cloud Logging, Error Reporting) to ensure systems perform optimally. Troubleshoot and resolve issues related to cloud services and infrastructure as they arise. Workflow Orchestration Orchestrate complex workflows using the Argo Workflow Engine. Containerization: Work extensively with Docker for containerization and image management. Optimization: Troubleshoot and optimize containerized applications for performance and security. Technical Skills Expertise with GCP and OpenShift (OCP) services, including but not limited to Compute Engine, Kubernetes Engine (GKE), BigQuery, Cloud Storage, Pub/Sub, Data Fusion, Airflow, Cloud Functions, and Cloud SQL. Proficiency in scripting languages such as Python, Bash, or PowerShell for automation. Familiarity with DevOps tools and CI/CD processes (e.g., GitLab CI, Cloud Build, Azure DevOps, Jenkins) Employee Type Permanent UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
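The Infrastructure as Code workflow this posting centers on, declaring desired resources and letting tooling reconcile real infrastructure toward them, can be sketched in miniature as a diff of desired versus actual state. This is purely illustrative (real work here would use Terraform; all resource names below are hypothetical):

```python
def plan(desired, actual):
    """Compute an execution plan: which resources to create, update, or delete.

    Mirrors, in miniature, what `terraform plan` does: compare the declared
    configuration against the recorded state of real infrastructure.
    """
    to_create = {k: v for k, v in desired.items() if k not in actual}
    to_update = {k: v for k, v in desired.items() if k in actual and actual[k] != v}
    to_delete = [k for k in actual if k not in desired]
    return {"create": to_create, "update": to_update, "delete": to_delete}

# Hypothetical declared configuration vs. hypothetical live state.
desired = {"bucket-logs": {"location": "EU"}, "vm-web": {"machine_type": "e2-small"}}
actual = {"bucket-logs": {"location": "US"}, "vm-old": {"machine_type": "e2-micro"}}
```

Running `plan(desired, actual)` would propose creating `vm-web`, updating `bucket-logs`, and deleting `vm-old`, exactly the create/update/delete triad an IaC tool reports before applying changes.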

Posted 3 days ago

Apply

0 years

0 Lacs

india

On-site

Key Responsibilities: · The Advanced Analytics Professional supports the collection, analysis, interpretation, and presentation of data to support strategic decision-making · Supports the acquisition, processing, integration, and cleaning of data from multiple sources · Uses a variety of tools to automate data collection and build reports guided by procedures · Undertakes initial data investigation and analysis to identify trends in data, deriving insights to help deliver business improvement · Designs and builds data visualizations to engage audiences in a compelling way and to enable effective storytelling · Supports the development and refinement of data dashboards and reports · Supports the presentation of data insights to relevant stakeholders for planning and decision support · Supports the implementation of ways to improve working processes within the area of data analytics · Owns contributions to team and data governance processes, policies, and regulations. Follows best practices and agile methodology, owning sprint goals and participating in sprint activities and governance Design and develop advanced, interactive dashboards and analytics solutions using Qlik Sense to meet complex business needs. Develop and maintain sophisticated data models, visualizations, and KPI metrics for diverse departments and functions. Perform advanced data analysis, including trend analysis, forecasting, and predictive modeling, to generate actionable insights. Integrate multiple data sources, including structured and unstructured data, into Qlik Sense to provide a comprehensive view of enterprise data. Collaborate with data engineers, data scientists, and business stakeholders to define analytical requirements and translate them into technical solutions. Optimize and fine-tune Qlik Sense applications for maximum performance and usability. Conduct data validation, quality checks, and troubleshooting to ensure accuracy and integrity of analytics solutions. 
Share insights and recommended actions clearly and effectively with both technical and non-technical audiences. Stay updated with the latest advances in analytics, data science, and Qlik Sense features to enhance capabilities and methodologies. Train and mentor junior analysts and end users on advanced Qlik Sense functionalities and analytical techniques. Key Skills: · Data Analysis / Data Preparation - Advanced · Dataset Creation / Data Visualization - Advanced · Data Quality Management - Intermediate · Programming / Scripting - Intermediate · Data Storytelling - Intermediate · Business Analysis / Requirements Analysis - Intermediate · Data Dashboards - Advanced · Business Intelligence Reporting - Advanced · Database Systems - Intermediate · Agile Methodologies / Decision Support - Foundation Technical Skills: · Cloud – GCP Fundamentals (Compute, BI Tools, Data Query) – Intermediate · Coding Languages – SQL, R, Python – Advanced · Libraries – scikit-learn, TensorFlow, Matplotlib, etc. – Intermediate · Databases – BigQuery, SQL, ETL – Intermediate · Visualization Tools – Qlik – Advanced to Expert Job Summary: We are looking for a highly skilled Advanced Analytics Professional with expertise in Qlik Sense to design, develop, and deliver advanced analytics solutions that drive data-driven decision-making. The ideal candidate will leverage deep analytical skills and extensive knowledge of Qlik Sense to create sophisticated dashboards, data models, and predictive analytics for complex business problems. Required Skills and Qualifications: Extensive experience with Qlik Sense development, including complex scripting, data modeling, and visualization. Strong understanding of advanced analytics concepts, including statistical analysis, predictive modeling, and data mining. Proficiency in data analysis tools such as R, Python, or SAS is a plus. Strong SQL skills for data extraction and transformation. Familiarity with data warehousing and ETL processes. 
Knowledge of machine learning algorithms and deployment is an advantage. Excellent problem-solving, analytical, and critical-thinking skills. Strong communication skills for translating complex data insights into clear business narratives. Relevant certifications (e.g., Qlik Sense Data Architect, Data Analyst, or similar) are preferred. Preferred Qualifications: Bachelor’s or Master’s degree in Data Science, Statistics, Computer Science, Business Analytics, or related fields. Experience working in industries such as finance, healthcare, retail, or manufacturing. Knowledge of cloud platforms and big data technologies.
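To make the trend-analysis and forecasting duties above concrete, here is a deliberately simple baseline: forecasting the next point of a series as the mean of its most recent values. This is an illustrative sketch only (the `sales` numbers are invented); production forecasting in this role would use richer statistical or ML models:

```python
def moving_average_forecast(series, window=3):
    """Forecast the next value of a series as the mean of the last `window` points.

    The simplest possible forecasting baseline, useful as a sanity check
    against which fancier models are compared.
    """
    if len(series) < window:
        raise ValueError("series shorter than window")
    tail = series[-window:]
    return sum(tail) / window

# Hypothetical monthly sales figures.
sales = [100, 104, 103, 107, 110, 112]
```

With a window of 3, the forecast for the next month is the mean of the last three observations, (107 + 110 + 112) / 3.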

Posted 3 days ago

Apply

1.0 years

0 Lacs

bengaluru, karnataka, india

On-site

At Abnormal AI, we are on a thrilling mission to safeguard the world's largest enterprises against a vast range of relentless email and collaboration application cybersecurity attacks. We pursue this by crafting an exceptional suite of products that empowers customers to seamlessly visualize, expertly control, and fearlessly combat cybersecurity threats. What We’re Looking For: Ownership & Impact: You’re growth-oriented, take ownership of your work, and are eager to make a significant impact. Continuous Learner: You are looking to grow as an engineer as part of a strong team, learning from established engineers, product managers, and designers. Strong Communicator: You excel in communication, with the ability to work autonomously and asynchronously with different teams. Must-Have Skills 1+ years of professional experience in software development. Backend development experience with Python Proficiency in frontend frameworks such as React or Next.js Familiarity with LLMs and AI development tools such as Cursor, GitHub Copilot, or Claude Experience in, and passion for, building scalable, enterprise-grade applications. Knowledge of cloud platforms (AWS, GCP, or Azure) and containerization (Docker, Kubernetes). Strong fundamentals in computer science, data structures, and performance optimization. Nice-to-Have Skills Familiarity with Golang Familiarity with ChatGPT, Cursor, and other GenAI productivity tools. Familiarity with Django and GraphQL What You’ll Do As part of our engineering team, you will: Leverage AI-Powered Development – Use Cursor, Copilot, and other AI tools to enhance productivity, optimize workflows, and automate repetitive tasks. Develop Data-Driven Applications – Build consumer-grade interfaces and APIs that power our advanced behavioral AI insights. Combat Modern Cyber Threats – Design and deploy secure, scalable systems that detect and prevent sophisticated cyberattacks. 
Collaborate with Fortune 500 Enterprises – Work with customers and security teams to rapidly iterate and deliver impactful solutions. Build at Scale – Design backend services and cloud architectures that support billions of security events across enterprises worldwide. 🚀 Ready to be part of the AI transformation at Abnormal AI? Apply Now! Once you apply, you’ll be invited to our AI-powered Development Challenge, where you’ll gain hands-on experience with AI-powered tools like Cursor and Copilot to build real-world application features. This challenge is a take-home assignment requiring 2-4 hours of work, to be completed within one week - apply when you’re ready! Abnormal AI is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or other characteristics protected by law.

Posted 3 days ago

Apply

0.0 - 10.0 years

0 Lacs

pune, maharashtra

On-site

Company Description Metro Global Solution Center (MGSC) is the internal solution partner for METRO, a €30.5 billion international wholesaler with operations in 31 countries through 625 stores and a team of 93,000 people globally. METRO operates in a further 10 countries with its Food Service Distribution (FSD) business and is thus active in a total of 34 countries. MGSC is present in Pune (India), Düsseldorf (Germany), and Szczecin (Poland). We provide HR, Finance, IT, and business operations support to 31 countries, speak 24+ languages, and process over 18,000 transactions a day. We are setting tomorrow’s standards for customer focus, digital solutions, and sustainable business models. For over 10 years, we have been providing services and solutions from our two locations in Pune and Szczecin. This has allowed us to gain extensive experience in how we can best serve our internal customers with high quality and passion. We believe that we can add value, drive efficiency, and satisfy our customers. Website: https://www.metro-gsc.in Company Size: 600-650 Headquarters: Pune, Maharashtra, India Type: Privately Held Inception: 2011 Job Description Who we are At METRO, we drive technology for METRO, one of the world’s leading international food wholesalers. From e-commerce to checkout and delivery software, we build products that make each day a success for our customers and colleagues. With passion and ownership, we shape the future of wholesale. We are looking for… A senior full stack engineer with deep expertise in both frontend and backend technologies. Someone with strong leadership skills who can guide architectural discussions, mentor engineers, and drive technical excellence. An engineer with proven experience in complex, distributed systems and workflow-driven architectures (Camunda/CIB7, Istio, microservices). 
This role matters to us… As a Senior Full Stack Engineer / Tech Lead, you will play a key role in METRO’s global Quality Management System, which harmonizes and streamlines quality assurance processes across all entities. This solution is built upon a large-scale codebase that integrates backend services and a complex monolithic frontend. Your contribution will be pivotal in guiding the split into modular, scalable components while ensuring reliability and design consistency. You will also lead technical discussions, align with architects, and mentor other engineers. Key Responsibilities Design, develop, and maintain both frontend (React, Redux, Material UI) and backend (Java, Spring Boot) components. Lead the modernization and modularization of a very large monolithic frontend & backend codebase. Collaborate with architects and product managers to align on long-term technical strategies and system design. Mentor and support mid-level engineers, fostering a culture of knowledge sharing and high code quality. Ensure system performance, security, and scalability across frontend and backend layers. Promote clean code practices, automated testing, and CI/CD pipelines to maintain development excellence. Work closely with DevOps and platform teams to ensure cloud-native deployments on GCP with Kubernetes and Istio. Qualifications Must-Have Qualifications Education Bachelor’s or Master’s degree in Computer Science, Software Engineering, or equivalent practical experience. Work Experience & Skills Proven hands-on experience with frontend frameworks (React, Redux, Material UI, HTML5, CSS3, JavaScript/TypeScript). Extensive backend experience with Java, Spring Boot, and microservices architectures. Strong experience working with Camunda (preferably CIB7) for workflow automation. Experience with Istio or other service mesh technologies. Experience with relational and NoSQL databases (PostgreSQL, MongoDB). 
Hands-on experience with Docker, Kubernetes, and CI/CD pipelines (GitHub Actions, Jenkins X, or similar). Proficiency in automated testing across frontend and backend components. Excellent English communication skills (written and spoken), with ability to collaborate across roles and cultures. Other Requirements Ability to balance frontend user experience with backend scalability and performance. Leadership skills — proven experience in mentoring and guiding engineering teams. Strong problem-solving mindset with a process-oriented approach. Nice-to-Have Experience splitting monolithic systems into modular architectures. Familiarity with cloud observability tools (Prometheus, Grafana, DataDog, GCP Monitoring). Knowledge of security best practices in distributed architectures (OAuth2, RBAC, mTLS). Experience participating in UI/UX design discussions and collaborating with designers or users.
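The workflow-driven architecture this role works in (Camunda models such flows in BPMN) can be pictured, at its very smallest, as a state machine with a fixed set of allowed transitions. The sketch below is purely illustrative; every state and action name is hypothetical:

```python
# Minimal approval-workflow state machine. Camunda-style engines add
# persistence, timers, and human task handling on top of this core idea.
TRANSITIONS = {
    ("draft", "submit"): "in_review",
    ("in_review", "approve"): "approved",
    ("in_review", "reject"): "draft",
}

def advance(state, action):
    """Return the next workflow state, or raise on an illegal transition."""
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"illegal transition: {action!r} from {state!r}")
```

Keeping transitions in a single table makes the legal paths through the process auditable at a glance, which is one reason workflow engines externalize process definitions rather than burying them in application code.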

Posted 3 days ago

Apply

10.0 years

0 Lacs

gurugram, haryana, india

On-site

Company Description Launched in 2007 by Aloke Bajpai & Rajnish Kumar, ixigo is a technology company focused on empowering Indian travelers to plan, book, and manage their trips across rail, air, buses, and hotels. ixigo assists travelers in making smarter travel decisions by leveraging artificial intelligence, machine learning, and data science-led innovations on ixigo’s OTA platforms, including websites and mobile applications. Job Description We are looking for an experienced and visionary Android Architect to define and drive the technical direction of our Android ecosystem. In this role, you will be responsible for setting the architectural roadmap, building scalable and performant solutions, and guiding the team in creating best-in-class mobile experiences that delight millions of users. Key Responsibilities: Define and own the overall Android application architecture, ensuring scalability, modularity, and maintainability. Collaborate with product, design, and backend teams to design end-to-end solutions and deliver seamless user experiences. Provide technical leadership and mentorship to Android developers, conducting design/architecture reviews and enforcing coding best practices. Evaluate, recommend, and implement the latest Android frameworks, libraries, and tools to keep the stack modern and efficient. Drive initiatives around app performance optimization, security, offline capabilities, and reliability. Architect and optimize CI/CD pipelines for faster, safer, and more automated releases. Lead proof-of-concept (POC) projects to validate new ideas and approaches. Act as the subject matter expert on Android technologies and advocate for mobile-first thinking across the organisation. Troubleshoot complex production issues and provide scalable long-term solutions. Qualifications 10+ years of professional Android development experience, with at least 3+ years in an architect/lead role. 
Strong expertise in Kotlin (and Java) with deep knowledge of the Android SDK, Jetpack components, MVVM/MVI, and modular app architecture. Proven experience in designing and scaling large, consumer-facing Android apps. Strong grasp of performance optimization, multithreading, memory management, and security best practices. Hands-on experience integrating with RESTful APIs, GraphQL, third-party SDKs, and cloud services (Firebase, AWS, GCP, etc.). Familiarity with CI/CD tools (Gradle, Jenkins, GitHub Actions, Bitrise) and automated testing frameworks (JUnit, Espresso, Mockito). Demonstrated ability to lead teams, influence stakeholders, and drive architectural decisions. Excellent problem-solving, communication, and leadership skills. Passion for mobile innovation and eagerness to stay ahead of emerging trends.

Posted 3 days ago

Apply

0 years

0 Lacs

bengaluru east, karnataka, india

On-site

1 Identifies potential risks, develops backup plans proactively, and reviews them from time to time to ensure risk mitigation tactics remain aligned. 2 Looks at resolving issues from a long-term perspective, bringing various stakeholders together to lay down and discuss alternative processes to prevent the recurrence of the problem. 3 Understands the drivers and constraints of execution in terms of time and resource allocation; reviews and prioritizes activities related to multiple areas in line with changing work requirements. 4 Provides line management, leadership, and strategic direction for the function, liaising closely with other managers. 5 Leads the preparation and authorizes the implementation of necessary information security policies, standards, procedures, and guidelines, in conjunction with the Security Committee. 1 Perform security requirement analysis, highlight risks, and recommend mitigation controls. 2 Formulate security policies, processes, and procedures. 3 Involvement in strategy planning, cross-functional liaison, customer interfacing, service level measurement, risk management, service portfolio planning, technical audits, information security monitoring, budgeting, and resource planning. 4 Review IT security architecture, third-party vendor security assessments, and product evaluations. 5 Lead the design and development of security architectures for different types of cloud and cloud/hybrid systems. 6 Information security pre-engagement support. Certifications required: CISSP/CCSP/CISM/CompTIA Security+/CISA/AWS Cloud Practitioner/Azure/GCP certifications/AWS Certified Solutions Architect - Associate/Professional/AWS Certified Security - Specialty/Azure Security Engineer Associate (AZ-500)/Google Cloud Associate/Professional

Posted 3 days ago

Apply

3.0 years

0 Lacs

new delhi, delhi, india

On-site

Job Title: Senior Full Stack Developer Location: New Delhi Job Type: Full-time Experience Level: 3-6 years About Genefied Genefied is a B2B vertical SaaS company based in New Delhi, offering innovative loyalty solutions for over 100+ consumer brands. Its solutions focus on accelerating loyalty, customer retention, and revenue growth for brands across industries such as apparel, bathware, plywood, auto parts, and FMCG. Genefied provides products such as GenuineMark, SupplyBeam, and Rewardify to help brands protect against counterfeit products, collect consumer data, and enhance retailer models. Job Description: We are looking for a skilled Full Stack Developer to join our dynamic team. As a Sr. Full Stack Developer, you will be responsible for developing and maintaining both front-end and back-end applications. You should be comfortable with both front-end and back-end technologies and have a passion for coding. Responsibilities: Develop user interfaces for modern web applications using React.js. Develop RESTful or GraphQL APIs with Node.js. Design data schemas in SQL (PostgreSQL/MySQL) and NoSQL (MongoDB) stores. Utilize Redux for state management in complex applications. Containerize applications using Docker/Kubernetes and deploy on AWS/GCP/Azure. Implement automated testing, conduct code reviews, and manage pipelines. Troubleshoot and debug issues across the entire stack. Excellent problem-solving and communication skills. Ability to work in a fast-paced environment and adapt to changing requirements. Required Skills 3–6 years of experience in full-stack web development. Strong front-end skills with HTML5, CSS3, JavaScript, and modern frameworks (React.js). Proficiency in server-side programming (Node.js). Database design with SQL and NoSQL systems. Experience with RESTful/GraphQL API development. Familiarity with Git and CI/CD pipelines. Working knowledge of Docker/Kubernetes and cloud platforms. 
Excellent debugging skills, a security mindset, and Agile collaboration skills. Preferred Experience Built or scaled SaaS/B2B dashboards and platforms. Exposure to analytics, real-time data, and data visualization. Direct experience working on coupon redemption-based websites or platforms. Designing and implementing secure coupon validation and redemption workflows. Building user-friendly interfaces for entering, scanning, or applying promo codes. Preventing coupon fraud through techniques such as code tracking, one-time-use enforcement, and usage limits. Preferred Qualifications: Bachelor’s degree in Computer Science, Engineering, or a related field Experience with serverless architectures Familiarity with Agile development methodologies If you are passionate about building cutting-edge web applications and enjoy working in a collaborative environment, we would love to hear from you!
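The coupon-fraud prevention techniques mentioned in this posting (code tracking, one-time-use enforcement, usage limits) can be sketched with a small redemption ledger. This is an illustrative toy only; all coupon codes are invented, and a real system would persist this state in a database and guard redemptions with transactions:

```python
class CouponLedger:
    """Track coupon redemptions, enforcing a per-code usage limit.

    A one-time-use coupon is simply a code whose limit is 1; code tracking
    falls out of keeping a per-code redemption count.
    """
    def __init__(self, limits):
        self.limits = limits   # code -> maximum allowed redemptions
        self.redeemed = {}     # code -> redemptions so far

    def redeem(self, code):
        if code not in self.limits:
            return "invalid"
        used = self.redeemed.get(code, 0)
        if used >= self.limits[code]:
            return "exhausted"
        self.redeemed[code] = used + 1
        return "ok"

# Hypothetical codes: one single-use, one usable twice.
ledger = CouponLedger({"WELCOME10": 1, "FESTIVE": 2})
```

A second attempt to redeem `WELCOME10` returns "exhausted", and an unknown code returns "invalid", which is the behavior one-time-use enforcement requires.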

Posted 3 days ago

Apply


3.0 years

0 Lacs

chennai, tamil nadu, india

On-site

Job Description Ford/GDIA Mission and Scope: At Ford Motor Company, we believe freedom of movement drives human progress. We also believe in providing you with the freedom to define and realize your dreams. With our incredible plans for the future of mobility, we have a wide variety of opportunities for you to accelerate your career potential as you help us define tomorrow’s transportation. Creating the future of smart mobility requires the highly intelligent use of data, metrics, and analytics. That’s where you can make an impact as part of our Global Data Insight & Analytics team. We are the trusted advisers that enable Ford to clearly see business conditions, customer needs, and the competitive landscape. With our support, key decision-makers can act in meaningful, positive ways. Join us and use your data expertise and analytical skills to drive evidence-based, timely decision-making. The Global Data Insights and Analytics (GDI&A) department at Ford Motor Company is looking for qualified people who can develop scalable solutions to complex real-world problems using Machine Learning, Big Data, Statistics, Econometrics, and Optimization. The goal of GDI&A is to drive evidence-based decision making by providing insights from data. Applications for GDI&A include, but are not limited to, Connected Vehicle, Smart Mobility, Advanced Operations, Manufacturing, Supply Chain, Logistics, and Warranty Analytics. About the Role: You will be part of the FCSD analytics team, playing a critical role in leveraging data science to drive significant business impact within Ford Customer Service Division. As a Data Scientist, you will translate complex business challenges into data-driven solutions. This involves partnering closely with stakeholders to understand problems, working with diverse data sources (including within GCP), developing and deploying scalable AI/ML models, and communicating actionable insights that deliver measurable results for Ford. 
Responsibilities:
- Build an in-depth understanding of the business domain and data sources, demonstrating strong business acumen.
- Extract, analyze, and transform data using SQL for insights.
- Apply statistical methods and develop ML models to solve business problems.
- Design and implement analytical solutions, contributing to their deployment, ideally leveraging cloud environments.
- Work closely and collaboratively with Product Owners, Product Managers, Software Engineers, and Data Engineers within an agile development environment.
- Integrate and operationalize ML models for real-world impact.
- Monitor the performance and impact of deployed models, iterating as needed.
- Present findings and recommendations effectively to both technical and non-technical audiences to inform and drive business decisions.

Qualifications:
- At least 3 years of relevant professional experience applying data science techniques to solve business problems, including demonstrated hands-on proficiency with SQL and Python.
- Bachelor's or Master's degree in a quantitative field (e.g., Statistics, Computer Science, Mathematics, Engineering, Economics).
- Hands-on experience conducting statistical data analysis (EDA, forecasting, clustering, hypothesis testing, etc.) and applying machine learning techniques (classification/regression, NLP, time-series analysis, etc.).

Technical Skills:
- Proficiency in SQL, including the ability to write and optimize queries for data extraction and analysis.
- Proficiency in Python for data manipulation (Pandas, NumPy), statistical analysis, and implementing machine learning models (Scikit-learn, TensorFlow, PyTorch, etc.).
- Working knowledge of a cloud environment (GCP, AWS, or Azure) is preferred for developing and deploying models.
- Experience with version control systems, particularly Git.
- Nice to have: exposure to Generative AI / Large Language Models (LLMs).
Functional Skills:
- Proven ability to understand and formulate business problem statements.
- Ability to translate business problem statements into data science problems.
- Strong problem-solving ability, with the capacity to analyze complex issues and develop effective solutions.
- Excellent verbal and written communication skills, with a demonstrated ability to translate complex technical information and results into simple, understandable language for non-technical audiences.
- Strong business engagement skills, including the ability to build relationships, collaborate effectively with stakeholders, and contribute to data-driven decision-making.
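As a rough illustration of the hypothesis-testing skills this role calls for, here is a minimal two-sample z-test sketch using only the Python standard library. The repair-time samples and the program names are invented for the example; real work at this level would more likely use SciPy or statsmodels on data pulled via SQL.

```python
import math
import statistics

def two_sample_z(a, b):
    """Approximate two-sample z statistic (large-sample Welch form)."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(var_a / len(a) + var_b / len(b))  # standard error of the difference
    return (mean_a - mean_b) / se

# Hypothetical repair-time samples (days) for two service programs
program_a = [4.1, 3.8, 5.0, 4.4, 4.7, 3.9, 4.2, 4.6]
program_b = [5.2, 4.9, 5.6, 5.1, 4.8, 5.4, 5.0, 5.3]

z = two_sample_z(program_a, program_b)
print(round(z, 2))  # a large negative z suggests program A resolves cases faster
```

With real data, the z statistic would be converted to a p-value (and sample-size assumptions checked) before drawing a conclusion.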

Posted 3 days ago

Apply

5.0 years

6 - 8 Lacs

gurgaon

On-site

About the Role

We are seeking a highly skilled Data Science Engineer with 5+ years of experience to join our client’s fast-growing SaaS business. The ideal candidate will work at the intersection of data engineering, machine learning, and product development, building scalable data solutions that power business decisions and enhance product performance.

Key Responsibilities
- Design, develop, and maintain data pipelines and ETL processes for large-scale data.
- Build, train, and deploy machine learning models to solve business problems and optimize SaaS product features.
- Collaborate with product, engineering, and business teams to translate requirements into data-driven solutions.
- Perform statistical analysis, predictive modeling, and A/B testing to generate insights.
- Ensure data accuracy, quality, and governance across all systems.
- Implement and optimize data storage, retrieval, and real-time processing solutions.
- Monitor model performance and continuously improve algorithms.
- Document processes, models, and best practices for reproducibility.

Key Requirements
- 5+ years of experience as a Data Science Engineer / Machine Learning Engineer.
- Strong proficiency in Python, SQL, and ML frameworks (TensorFlow, PyTorch, Scikit-learn).
- Hands-on experience with big data technologies (Spark, Hadoop, Kafka) and cloud platforms (AWS, GCP, Azure).
- Deep understanding of statistical modeling, data mining, and predictive analytics.
- Experience with data pipelines and workflow orchestration tools (Airflow, Luigi, Prefect).
- Strong problem-solving and analytical skills with the ability to work in fast-paced SaaS environments.
- Excellent communication skills to present findings to both technical and non-technical stakeholders.

Nice to Have
- Exposure to NLP, recommendation systems, or deep learning applications.
- Prior experience in scaling SaaS platforms using AI/ML.
- Familiarity with MLOps tools (MLflow, Kubeflow, Docker, Kubernetes).
What We Offer
- Opportunity to work with a fast-growing SaaS company at the forefront of innovation.
- A dynamic environment where your work directly impacts product growth and customer success.
- Career growth in advanced data science, AI, and product engineering.
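As a toy illustration of the pipeline/ETL responsibilities this role describes, the sketch below runs a tiny extract-transform-load step entirely with Python's standard library (csv + sqlite3). The raw export, table schema, and revenue figures are invented for the example; a production pipeline would typically read from object storage or Kafka and be orchestrated with a tool like Airflow.

```python
import csv
import io
import sqlite3

# Hypothetical raw export; in production this would arrive from object storage or a stream
RAW = """user_id,plan,mrr
1,pro,49.0
2,free,0.0
3,pro,49.0
4,enterprise,499.0
"""

def extract(text):
    # Parse the CSV export into dict rows
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Keep paying customers only and normalize types
    return [(int(r["user_id"]), r["plan"], float(r["mrr"]))
            for r in rows if float(r["mrr"]) > 0]

def load(rows, conn):
    # Load into a relational table and return total monthly recurring revenue
    conn.execute("CREATE TABLE IF NOT EXISTS revenue (user_id INT, plan TEXT, mrr REAL)")
    conn.executemany("INSERT INTO revenue VALUES (?, ?, ?)", rows)
    return conn.execute("SELECT SUM(mrr) FROM revenue").fetchone()[0]

conn = sqlite3.connect(":memory:")
total = load(transform(extract(RAW)), conn)
print(total)  # 597.0
```

Splitting extract, transform, and load into separate functions mirrors how orchestration tools schedule each stage as an independent, retryable task.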

Posted 3 days ago

Apply

2.0 years

4 - 9 Lacs

gurgaon

On-site

Job Title: Backend Developer (Node.js)

We are seeking a Backend Developer with at least 2 years of professional experience to join our growing team. The ideal candidate will have strong expertise in Node.js, REST API development, Git version control, MySQL, and MongoDB, along with a basic understanding of frontend technologies. You should be comfortable working in Linux (Ubuntu) environments, handling file systems, and integrating APIs. Experience with authentication, scalability, and performance optimization will be a strong advantage.

Responsibilities
- Design, develop, and maintain server-side applications using Node.js.
- Build and maintain RESTful APIs for frontend and third-party integrations.
- Work with AJAX and API integration for seamless client-server communication.
- Handle file uploads, file system operations, and storage management.
- Collaborate with frontend developers to ensure smooth integration between UI and backend.
- Implement authentication, authorization, and security best practices.
- Work with relational and non-relational databases: MySQL (queries, joins, procedures) and MongoDB (schemas, aggregation, indexing).
- Deploy and manage applications in Linux/Ubuntu environments.
- Use Git version control for code management, branching, and code reviews.
- Optimize application performance, scalability, and reliability.
- Debug, troubleshoot, and resolve backend-related issues.
- Participate in agile workflows including sprint planning, stand-ups, and code reviews.

Requirements
- 2 years of professional experience as a Backend Developer or similar role.
- Strong proficiency in Node.js and JavaScript (ES6+).
- Hands-on experience with: RESTful API development; AJAX and API integration; file handling (uploads, downloads, storage); Git version control; MySQL (database design, queries, optimization); MongoDB (document modeling, indexing, aggregation).
- Working knowledge of Linux/Ubuntu commands and environments.
- Basic understanding of frontend technologies (HTML, CSS, JavaScript) for integration.
- Familiarity with authentication/authorization (JWT, OAuth, etc.).
- Strong problem-solving, debugging, and optimization skills.
- Good communication and teamwork abilities.

Nice-to-Have (Optional Skills)
- Experience with microservices architecture.
- Knowledge of message brokers (Kafka, RabbitMQ, etc.).
- Exposure to Docker or containerization.
- Familiarity with cloud platforms (AWS, GCP, or Azure).
- Knowledge of GraphQL.

Perks & Benefits
- Competitive salary package.
- Opportunity to work with modern backend technologies and scalable architectures.
- Collaborative and growth-oriented team environment.

Job Types: Full-time, Permanent
Benefits: Health insurance, Provident Fund
Work Location: In person
Speak with the employer: +91 8448491390

Posted 3 days ago

Apply

2.0 years

1 - 6 Lacs

gurgaon

On-site

Job Location: Gurgaon
Last Updated On: 21 Aug 2025
Work Experience: 2+ Years

Job Description

We are looking for a talented Full-Stack Developer (NestJS with TurboRepo experience) with strong experience in building scalable web applications. The ideal candidate should have hands-on expertise in modern front-end and back-end technologies, with a preference for those who have worked with TurboRepo for monorepo management. This role involves working closely with cross-functional teams to design, develop, and deploy robust, high-performing solutions.

Key Responsibilities
- Develop and maintain scalable, secure, and high-performance web applications.
- Work with TurboRepo for efficient monorepo setup and management.
- Collaborate with product managers, designers, and other developers to deliver quality software.
- Write clean, maintainable, and well-documented code.
- Implement best practices for CI/CD, testing, and deployment.
- Troubleshoot, debug, and optimize application performance.
- Stay up-to-date with emerging technologies and contribute to architectural decisions.

Required Skills
- Strong proficiency in JavaScript/TypeScript.
- Proficiency in Node.js, NestJS, and similar back-end frameworks.
- Hands-on experience with React.js/Next.js and modern front-end frameworks.
- Experience with databases (MySQL, PostgreSQL, MongoDB).
- Familiarity with TurboRepo (preferred).
- Understanding of REST APIs, GraphQL, and microservices.
- Experience with version control systems (Git/GitHub/GitLab).
- Knowledge of Docker, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure) is a plus.

Posted 3 days ago

Apply

175.0 years

4 - 8 Lacs

gurgaon

On-site

At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

We’re looking for a Site Reliability/Application Support Engineer (SRE/AS) responsible for web/servicing application performance, availability, and reliability. The candidate will provide consultation and strategic recommendations by quickly assessing and remediating complex platform availability issues. Site Reliability Engineering (SRE) is a continuous engineering discipline that effectively combines software development and systems engineering to build and run scalable, distributed, fault-tolerant systems. This role will ensure that American Express internal and external services have reliability and uptime appropriate to users' needs, with continuous improvement and an ever-watchful, automated eye on capacity and performance.

This role will drive the SRE/AS mindset, which strives to use software engineering to build and run better production systems. You will write software to optimize day-to-day work through better automation, monitoring, alerting, testing, and deployment. You’ll be expected to work with several technology partners to identify areas of opportunity within the availability platform and build solutions that automate monitoring for the modernization platform, driving efficiencies through technology and constant innovation. You will be responsible for implementing tracing, monitoring, and tooling solutions to maximize the performance and availability of our web/servicing applications.
Qualifications
- BS or MS degree in computer science, computer engineering, or another technical discipline, or equivalent.
- 6-10 years of work experience in DevOps/SRE.
- Experience in Genesys Engage (PureEngage), Genesys Cloud (PureCloud), or Genesys PureConnect.
- Good understanding of VoIP, SIP protocols, and telephony infrastructure.
- Ability to analyze Genesys logs (SIP, T-Server, Interaction, WDE) to identify issues.
- Good understanding of call flows, IVR scripting (Genesys Composer, SCXML), and routing logic.
- Experience configuring and troubleshooting Genesys reports (GCXI, Infomart, Pulse).
- Analytical knowledge of and exposure to root cause identification using analyzer tools such as Kazimir, MyZamir, and SpeechMiner.
- Experience with Oracle, SQL Server, or PostgreSQL for configuration and troubleshooting.
- Strong understanding of TCP/IP, SIP, and RTP.
- Knowledge of public cloud technologies (GCP, AWS, Azure, etc.) would be an advantage.
- Hands-on experience with enterprise toolsets such as Grafana, Dynatrace, AppDynamics, BMC, Prometheus, etc.
- Knowledge of Unix shell scripting, Perl, or Python programming is preferred.
- Working experience with network load balancers, Global Traffic Managers (GTMs), and Local Traffic Managers (LTMs).
- Hands-on experience configuring Splunk, Grafana dashboards, etc.
- Good understanding of Linux OS internals, performance tools, core commands, security, etc.
- Exposure to enterprise platform migration from dedicated to cloud environments is preferred.

We back you with benefits that support your holistic well-being so you can be and deliver your best.
This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
- Competitive base salaries
- Bonus incentives
- Support for financial well-being and retirement
- Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
- Flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need
- Generous paid parental leave policies (depending on your location)
- Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
- Free and confidential counseling support through our Healthy Minds program
- Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
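Several of the qualifications for this SRE role center on analyzing SIP logs and automating monitoring. As an illustrative sketch only (not American Express's or Genesys's actual tooling), the snippet below scans hypothetical SIP response lines and flags an alert when server-side failures cross a threshold; the log format, the 503 responses, and the threshold are all invented for the example.

```python
import re
from collections import Counter

# Hypothetical SIP trace lines; real Genesys T-Server logs are far more verbose
LOG = """\
12:00:01 SIP/2.0 200 OK
12:00:02 SIP/2.0 503 Service Unavailable
12:00:03 SIP/2.0 503 Service Unavailable
12:00:04 SIP/2.0 180 Ringing
12:00:05 SIP/2.0 503 Service Unavailable
"""

STATUS = re.compile(r"SIP/2\.0 (\d{3})")

def failure_counts(log_text):
    # Count only 4xx/5xx response codes
    codes = Counter(m.group(1) for m in STATUS.finditer(log_text))
    return {code: n for code, n in codes.items() if code.startswith(("4", "5"))}

def should_alert(log_text, threshold=3):
    # Fire an alert once failures in the window reach the threshold
    return sum(failure_counts(log_text).values()) >= threshold

print(failure_counts(LOG))  # {'503': 3}
print(should_alert(LOG))    # True
```

In practice a check like this would run over a sliding time window and feed a tool such as Splunk, Grafana, or Prometheus rather than printing to stdout.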

Posted 3 days ago

Apply

4.0 years

18 Lacs

coimbatore, tamil nadu, india

Remote

Experience: 4.00+ years
Salary: INR 1800000.00 / year (based on experience)
Expected Notice Period: 7 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (payroll and compliance to be managed by SuiteSolvers)

(*Note: This is a requirement for one of Uplers' clients - an Atlanta-based IT Services and IT Consulting Company)

What do you need for this opportunity?

Must-have skills: Docker, Vector Database, Fintech, Testing and Deployment, Data Science, Artificial Intelligence (AI), Large Language Model APIs (LLM APIs), Large Language Model (LLM), Prompt Engineering, FastAPI / Flask, Cloud

About The Job

SuiteSolvers is a boutique consulting firm that helps mid-market companies transform and scale through smart ERP implementations, financial automation, and operational strategy. We specialize in NetSuite and Acumatica, and we’re building tools that make finance and operations more intelligent and less manual. Our clients range from high-growth startups to billion-dollar enterprises. We’re hands-on, fast-moving, and results-driven: our work shows up in better decisions, faster closes, cleaner audits, and smarter systems. We’re not a bloated agency. We’re a small team with high standards. If you like solving real business problems with clean data pipelines, smart automation, and the occasional duct-tape hack that gets the job done, this might be your kind of place.

We are looking for a Data Engineer.
Essential Technical Skills

AI/ML (Required)
- 2+ years hands-on experience with LLM APIs (OpenAI, Anthropic, or similar)
- Production deployment of at least one AI system that is currently running in production
- LLM framework experience with LangChain, CrewAI, or AutoGen (any one is sufficient)
- Function calling/tool use: ability to build AI systems that can call external APIs and functions
- Basic prompt engineering: understanding of techniques like Chain-of-Thought and ReAct patterns

Python Development (Required)
- 3+ years Python development with strong fundamentals
- API development using Flask or FastAPI with proper error handling
- Async programming: understanding of async/await patterns for concurrent operations
- Database integration: working with PostgreSQL, MySQL, or similar relational databases
- JSON/REST APIs: consuming and building REST services

Production Systems (Required)
- 2+ years building production software that serves real users
- Error handling and logging: building robust systems that handle failures gracefully
- Basic cloud deployment: experience with AWS, Azure, or GCP (any one platform)
- Git/version control: collaborative development using Git workflows
- Testing fundamentals: unit testing and integration testing practices

Business Process (Basic Required)
- User requirements: ability to translate business needs into technical solutions
- Data quality: recognizing and handling dirty/inconsistent data
- Exception handling: designing workflows for edge cases and errors

Professional Experience (Minimum)

Software Engineering
- 3+ years total software development experience
- 1+ production AI project: any AI/ML system deployed to production (even simple ones)
- Cross-functional collaboration: worked with non-technical stakeholders
- Problem-solving: demonstrated ability to debug and resolve complex technical issues

Communication & Collaboration
- Technical documentation: ability to write clear technical docs and code comments
- Stakeholder communication: ability to explain technical concepts to business users
- Independent work: ability to work autonomously with minimal supervision
- Learning agility: quickly pick up new technologies and frameworks

Educational Background (Any One)
- Bachelor's degree in Computer Science, Engineering, or a related technical field, OR equivalent experience (demonstrable technical skills through projects/work)
- Alternative paths: coding bootcamp plus 2+ years professional development experience; self-taught with a strong portfolio of production projects; technical certifications (AWS, Google Cloud, etc.) plus relevant experience (nice to have)

Demonstrable Skills (Portfolio Requirements)

Must show evidence of:
- One working AI application: GitHub repo or live demo of LLM integration
- Python projects: code samples showing API development and data processing
- Production deployment: any application currently running and serving users
- Problem-solving ability: examples of debugging complex issues or optimizing performance

Nice to Have (Not Required)
- Financial services or fintech experience
- Vector databases (Pinecone, Weaviate) experience
- Docker/containerization knowledge
- Advanced ML/AI education or certifications

How to apply for this opportunity?
Step 1: Click on Apply! and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
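The "function calling/tool use" requirement in this posting can be illustrated with a minimal sketch. Everything here is hypothetical: the tool registry, the `get_invoice_total` tool, and the simulated model output are invented for the example. A production system would pass tool schemas to an LLM API (for example, via an OpenAI-style tools parameter) and dispatch on the tool call in the model's response rather than on a hard-coded string.

```python
import json

# Hypothetical tool registry; a real agent would advertise these to the LLM as schemas
TOOLS = {
    "get_invoice_total": lambda invoice_id: {"invoice_id": invoice_id, "total": 1250.00},
}

def dispatch(tool_call_json):
    """Execute a model-emitted tool call of the form {"name": ..., "arguments": {...}}."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]        # look up the requested tool
    return fn(**call["arguments"])  # invoke it with the model-supplied arguments

# Simulated model output requesting a tool invocation
model_output = '{"name": "get_invoice_total", "arguments": {"invoice_id": "INV-42"}}'
result = dispatch(model_output)
print(result["total"])  # 1250.0
```

The key pattern is the separation between the model (which only emits structured JSON) and the dispatcher (which validates the name and executes real code), so untrusted model output never runs directly.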

Posted 3 days ago

Apply