
15888 GCP Jobs - Page 31

JobPe aggregates listings for easy discovery; applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Here's what you will do:
- Incident Response Leadership: Develop, manage, and coordinate the response to cybersecurity incidents from detection through resolution, including maintaining and updating incident response playbooks. Ensure swift and effective action to minimize risk and mitigate damage.
- Risk Assessment & Management: Conduct regular risk assessments, develop risk management strategies, and implement controls to mitigate identified risks.
- Threat Intelligence and Analysis: Research and analyze emerging threats, using threat intelligence platforms to enhance preparedness and proactively predict and prevent potential threats.
- Application Security: Develop and implement security measures to protect applications throughout their lifecycle. Collaborate with development teams to identify and address vulnerabilities in application code, ensuring secure coding practices are followed and applications are resilient against attacks.
- Security Solutions Development: Design and implement security solutions that align with business objectives and industry standards, ensuring compliance and robustness across all environments, including cloud platforms such as AWS, GCP, and Azure.
- DevSecOps Advocacy: Champion secure development practices within DevOps processes, providing guidance on security best practices throughout the software development lifecycle (SDLC).
- Cloud Security Leadership: Lead a team of cloud security engineers, collaborating closely with cloud architects to embed security by design. Focus on developing and maintaining security protocols in cloud environments, and deliver strategic security solutions for cloud implementations.
- Operational Security Management: Oversee security monitoring of operational and production environments, ensuring threats are identified and addressed promptly while maintaining system integrity.
- Security Compliance Management: Ensure compliance with regulatory requirements, manage audits, and provide detailed reporting and analysis on security incidents to key stakeholders.
- Third-Party Risk Management: Oversee security assessments and management of third-party vendors to ensure they meet the organization's security standards.
- Security Training & Awareness: Develop and lead security awareness programs to educate employees on best practices and emerging threats.
- Stakeholder Collaboration & Reporting: Maintain strong relationships with key stakeholders, including incident response and disaster recovery teams, and communicate security concepts effectively to technical and non-technical audiences.

Required Skills and Qualifications:
- Extensive Security Experience: 5+ years in information security, with hands-on experience in forensic analysis, threat landscape understanding, and managing security in large-scale, public cloud environments (e.g., AWS, Azure, GCP).
- Technical Proficiency: Extensive experience with security tools across enterprise, application, CDN, and cloud security domains, coupled with proficiency in automation and scripting languages (e.g., Python, PowerShell) to enhance security operations and streamline incident response.
- Security Standards Knowledge: Strong understanding of industry standards such as NIST, ISO 27001, CIS, OWASP, and Zero Trust architecture, with the ability to apply them effectively.
- Leadership and Mentorship: Proven ability to lead and mentor security teams, fostering a collaborative and high-performance environment.
- Certifications: Relevant security certifications such as CISSP, SSCP, CCSP, GCIH, or OSCP.
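The posting above asks for scripting (e.g., Python) to streamline security operations and incident response. As a rough illustration of that kind of automation, here is a minimal sketch; the log format and the alert threshold are both invented for illustration and are not part of the listing:

```python
from collections import Counter

def flag_bruteforce(log_lines, threshold=3):
    """Count 'FAILED LOGIN' events per source IP and flag noisy sources.

    Expects lines like '2024-01-01T00:00:00 FAILED LOGIN from 10.0.0.5'
    (a hypothetical format chosen purely for this sketch).
    """
    failures = Counter()
    for line in log_lines:
        if "FAILED LOGIN" in line:
            # The source IP is assumed to be the last whitespace token.
            failures[line.split()[-1]] += 1
    return [ip for ip, count in failures.items() if count >= threshold]

logs = [
    "2024-01-01T00:00:00 FAILED LOGIN from 10.0.0.5",
    "2024-01-01T00:00:01 FAILED LOGIN from 10.0.0.5",
    "2024-01-01T00:00:02 FAILED LOGIN from 10.0.0.5",
    "2024-01-01T00:00:03 LOGIN OK from 10.0.0.9",
]
print(flag_bruteforce(logs))  # → ['10.0.0.5']
```

In a real SOC this logic would live behind a SIEM query or a playbook step; the point is only the shape of the automation, not a production detection rule.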

Posted 1 day ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Yubi, formerly known as CredAvenue, is redefining global debt markets by freeing the flow of finance between borrowers, lenders, and investors. We are the world's possibility platform for the discovery, investment, fulfillment, and collection of any debt solution. At Yubi, opportunities are plenty and we equip you with tools to seize them. In March 2022, we became India's fastest fintech and most impactful startup to join the unicorn club, with a Series B fundraising round of $137 million. In 2020, we began our journey with a vision of transforming and deepening the global institutional debt market through technology. Our two-sided debt marketplace helps institutional and HNI investors find the widest network of corporate borrowers and debt products on one side, and helps corporates discover investors and access debt capital efficiently on the other. Switching between platforms is easy, which means investors can lend, invest, and trade bonds, all in one place. All of our platforms shake up the traditional debt ecosystem and offer new ways of digital finance.

Our platforms:
- Yubi Credit Marketplace: With the largest selection of lenders on one platform, our credit marketplace helps enterprises partner with lenders of their choice for any and all capital requirements.
- Yubi Invest: Fixed-income securities platform for wealth managers and financial advisors to channel client investments in fixed income.
- Financial Services Platform: Designed for financial institutions to manage co-lending partnerships and asset-based securitization.
- Spocto: Debt recovery and risk mitigation platform.
- Corpository: Dedicated SaaS solutions platform powered by decision-grade data, analytics, pattern identification, early-warning signals, and predictions for lenders, investors, and business enterprises.

So far, we have onboarded over 17,000 enterprises and 6,200+ investors and lenders, and have facilitated debt volumes of over INR 1,40,000 crore. Backed by marquee investors like Insight Partners, B Capital Group, Dragoneer, Sequoia Capital, LightSpeed, and Lightrock, we are a one-of-its-kind debt platform globally, revolutionizing the segment. At Yubi, people are at the core of the business and our most valuable assets. Yubi is constantly growing, with 1,000+ like-minded individuals today who are changing the way people perceive debt. We are a fun bunch who are highly motivated and driven to create a purposeful impact. Come, join the club to be a part of our epic growth story.

Senior Software Engineer - Fintech Marketplace Product

Overview: The ideal candidate is a self-motivated multi-tasker and demonstrated team player with a strong focus on customer handling. You will be a Senior Developer responsible for developing new software products and enhancing existing products.

Role and Responsibilities:
- Develop a revolutionary finance marketplace product, focusing on design, user experience, and business logic to ensure ease of use, appeal, and effectiveness.
- Ensure that implementations adhere to defined specifications and processes outlined in the PRD.
- Own end-to-end quality of deliverables throughout all phases of the software development lifecycle.
- Collaborate with managers, leads, and peers to explore implementation options.
- Manage continuously changing business needs and function effectively in a fast-paced environment.
- Mentor junior engineers and foster innovation within the team.
- Design and develop software components and systems within the pod.
- Evaluate and recommend tools, technologies, and processes, driving adoption to ensure high-quality products.

Requirements:
- Minimum 5+ years of experience in backend development, delivering enterprise-class web applications and services.
- Expertise in Java technologies, including Spring, Hibernate, and Kafka.
- Strong knowledge of NoSQL and RDBMS, with expertise in schema design.
- Familiarity with Kubernetes deployment and managing CI/CD pipelines.
- Experience with microservices architecture and RESTful APIs.
- Familiarity with monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack).
- Competent in software engineering tools (e.g., Java build tools) and best practices (e.g., unit testing, test automation, continuous integration).
- Experience with AWS and GCP cloud technologies and developing secure applications.
- Strong understanding of the software development lifecycle and agile methodologies.
- Willingness and capability to work on-site with clients to ensure project success.

Posted 1 day ago

Apply

4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


@SynapseIndia We're Hiring
#Designation: Sr Software Engineer (Python)
#Experience: 4+ years
#Location: NSEZ, Sector 81, Noida
#OnlyImmediateJoiner #WorkFromOffice
Interested candidates, share your resume at surbhib@synapseco.com

JOB DESCRIPTION
We are looking for a highly skilled Senior Python AI/ML Developer to join our team. The ideal candidate will have extensive experience designing, developing, and deploying machine learning models and AI solutions using Python. You will collaborate with data scientists, engineers, and product teams to build scalable, efficient, and innovative AI-driven applications.

Roles & Responsibilities:
- Design, develop, and deploy machine learning models and AI algorithms using Python and relevant libraries.
- Collaborate with cross-functional teams to gather requirements and translate business problems into AI/ML solutions.
- Optimize and scale machine learning pipelines and systems for production.
- Perform data pre-processing, feature engineering, and exploratory data analysis.
- Implement and fine-tune deep learning models using frameworks like TensorFlow, PyTorch, or similar.
- Conduct experiments and evaluate model performance using statistical methods.
- Write clean, maintainable, and well-documented code.
- Mentor junior developers and participate in code reviews.
- Stay up to date with the latest AI/ML research and technologies.
- Ensure model deployment is seamless and models are integrated with existing infrastructure.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field.
- 5+ years of professional experience in Python programming with a focus on AI/ML.
- Strong experience with Python ML libraries such as scikit-learn, TensorFlow, Keras, PyTorch, XGBoost, etc.
- Solid understanding of machine learning algorithms, neural networks, and deep learning.
- Experience with data manipulation libraries (Pandas, NumPy) and data visualization tools (Matplotlib, Seaborn).
- Experience with cloud platforms (AWS, GCP, Azure) and deploying ML models using Docker and Kubernetes.
- Familiarity with NLP, Computer Vision, or other AI domains is a plus.
- Strong problem-solving skills and the ability to work independently and collaboratively.
- Excellent communication skills.

#PythonDevelopment #PythonAIML #PythonDeveloper #AI/ML #CloudPlatform #Pythontools #ImmediateJoiner
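Postings like this assume fluency with the basic ML workflow the libraries above wrap: feature engineering, then fitting and applying a model. As a stdlib-only sketch (no scikit-learn, so it stays self-contained), here are two of the smallest pieces of that workflow, z-score standardization and a one-nearest-neighbour classifier; the toy data is invented for illustration:

```python
import math

def standardize(column):
    """Z-score a list of numbers (a common feature-engineering step)."""
    mean = sum(column) / len(column)
    std = math.sqrt(sum((x - mean) ** 2 for x in column) / len(column))
    return [(x - mean) / std for x in column]

def predict_1nn(train_X, train_y, point):
    """Label a point with the class of its nearest training neighbour."""
    dists = [(math.dist(row, point), label)
             for row, label in zip(train_X, train_y)]
    return min(dists)[1]

# Toy 2-D dataset: two well-separated clusters, labels 'a' and 'b'.
train_X = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
train_y = ["a", "a", "b", "b"]

print(standardize([1.0, 2.0, 3.0]))
print(predict_1nn(train_X, train_y, (4.8, 5.1)))  # → b
```

With scikit-learn the same two steps would be `StandardScaler` plus `KNeighborsClassifier(n_neighbors=1)`; the sketch just makes the arithmetic visible.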

Posted 1 day ago

Apply

0 years

0 Lacs

Kolkata, West Bengal, India

On-site


Data Engineer

Experience Level: 12 to 16 months only (excluding internship)
Job location: Bengaluru, Chennai, Gurgaon, Kolkata, Pune, Hyderabad
B.Tech/BE/ME/M.Tech, 2023 & 2024 graduates only; others need not apply.

MUST HAVE
• Minimum of 12 months of data engineering experience.
• Strong technical knowledge of tools like Azure Data Factory/Databricks/GCP/Snowflake, SQL, and Python.
• Experience collaborating with business stakeholders to identify and meet data requirements.
• Experience using Azure services and tools to ingest, egress, and transform data from multiple sources.
• Delivered ETL/ELT solutions, including data extraction, transformation, cleansing, data integration, and data management.
• Implemented batch and near-real-time data ingestion pipelines.
• Experience working on event-driven cloud platforms for cloud services and apps, data integration for building and managing pipelines, data warehouses running on serverless infrastructure, and workflow orchestration using Azure cloud data engineering components (Databricks, Synapse, etc.).
• Excellent written and oral communication skills.

GOOD TO HAVE
• Proven ability to work with large cross-functional teams, with good communication skills.
• Experience with cloud migration methodologies and processes.
• Azure Certified Data Engineer.
• Exposure to Azure DevOps and GitHub.
• Ability to drive customer calls independently.
• Ability to take ownership and work with the team.
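The must-have list centres on ETL/ELT work: extraction, transformation, cleansing, integration. As a minimal, library-free sketch of one cleansing/de-duplication step (the record shape is a made-up example, not from the posting):

```python
def cleanse(records):
    """A toy transform step: trim string fields, drop rows missing an id,
    and de-duplicate on id, keeping the first occurrence."""
    seen, out = set(), []
    for rec in records:
        rec = {k: v.strip() if isinstance(v, str) else v
               for k, v in rec.items()}
        key = rec.get("id")
        if not key or key in seen:
            continue  # skip rows with no id, and repeated ids
        seen.add(key)
        out.append(rec)
    return out

raw = [
    {"id": "1", "city": " Kolkata "},
    {"id": "1", "city": "Kolkata"},   # duplicate id, dropped
    {"id": "",  "city": "Pune"},      # missing id, dropped
    {"id": "2", "city": "Chennai"},
]
print(cleanse(raw))
```

In Azure Data Factory or Databricks the same step would be a mapping data flow or a Spark job; the sketch only shows the transformation logic itself.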

Posted 1 day ago

Apply

1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


📖 Our Story: marketfeed (YC S21)
marketfeed is a Y Combinator–funded, seed-stage fintech founded in 2020 by Sharique Samsudheen and Sooraj E. Our automated trading platform helps everyday investors grow wealth through sophisticated yet intuitive strategies. Today we serve 1.5 million+ followers and 400k app users, and manage ₹250 crore+ in traded volume, while compounding 20%+ MoM growth with a team of 50+ passionate builders. Our next chapter: scaling from weekly positional trades into intraday systems so users capture even more market edge.

🕵️ Who are we looking for?
We're looking for a Quantitative Analyst who will help maintain and enhance our live systematic trading models, design new data-driven strategies, and collaborate closely with senior quants, developers, and traders to deliver robust, scalable research, all from our Bengaluru HQ (hybrid friendly).

🎯 You're a great fit if you …
- Have 1-3 years of hands-on quantitative or data-science experience (internships count!).
- Hold a degree in Engineering, Mathematics, Statistics, Physics, Computer Science, or Finance.
- Code fluently in Python (pandas, NumPy, back-testing libs) and are comfortable with Git.
- Love digging into time-series data, running walk-forward tests, and using Monte Carlo stress checks.
- Understand the basics of trend following, Sharpe, drawdown, and VaR, or can learn fast.
- Communicate clearly: you can turn a notebook of stats into a concise, decision-ready summary.

Bonus points for …
- Experience with intraday, positional, or swing strategies in the BSE/NSE F&O segment.
- Familiarity with CI/CD, Docker, or Kubernetes in research pipelines.
- Exposure to risk dashboards or trade-ops tooling.

💻 Your Key Responsibilities
- Refine Live Strategies: You'll refine existing trading strategies to ensure they remain smooth in performance and compliant with internal and external guidelines. Why it matters: keeps trading efficient and within operational constraints. Sample output: backtest results fall within the set thresholds.
- Back-test New Parameter Sets: You'll revisit shelved strategies or indicators, apply fresh parameters, and evaluate their potential. Why it matters: unlocks value from previous research and accelerates production timelines. Sample output: turnaround of shelved strategies with improved outcomes and a quick push to production.
- Create Strategies from Shelved Indicators: You'll develop new strategies based on previously explored but unused indicators. Why it matters: allows for scalable business expansion without reinventing the wheel. Sample output: new strategy specification successfully pushed to production.
- Produce Risk & VaR Checks: You'll conduct risk analysis and Value-at-Risk (VaR) assessments on proposed changes. Why it matters: ensures all changes stay within predefined risk guardrails. Sample output: demonstrated reduction in VaR.
- Coordinate Sign-off with SQR & DevOps: You'll ensure every new deployment has the appropriate sign-offs for safety and compliance. Why it matters: enables fast and safe promotion from testing to live environments. Sample output: approved release ticket.
- Document Process Improvements: You'll update internal documentation to reflect improved workflows and methodologies. Why it matters: builds a knowledge base and ensures continuity for future hires. Sample output: updated pages in the research playbook.

🛠️ Preferred Skills
- GCP for data pulls
- Python proficiency
- Curiosity about intraday microstructure and latency constraints

🎁 What's in it for you
- Ship real alpha that thousands of users rely on; your code will run in production within weeks.
- Mentorship from senior quants building cutting-edge systems.
- Flexible leave and work hours, dog-friendly office, annual off-sites, and competitive CTC + ESOPs.
- Comprehensive health cover (physical & mental), unlimited tele-consults, dental, annual check-ups.

💬 Our Hiring Process
1. Intro chat: culture & motivation fit (30 min).
2. Quant screen: probability & statistics problems (60 min).
3. Coding/back-test round: small Python task or notebook review (90 min).
4. On-site with founders & SQR: deep dive into your projects and our mission (60 min).
5. Offer & onboarding: reference checks, paperwork, start date!

Ready to turn ideas into live trading code, and learn the craft of systematic alpha along the way? Apply now and help us democratize smart, risk-aware trading.
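The role leans on metrics like Sharpe, drawdown, and VaR. As a refresher on what those numbers are, here is one common way to compute them from a series of per-period returns; population standard deviation, a zero risk-free rate, and plain historical VaR are simplifying assumptions, not marketfeed's actual methodology:

```python
import math

def sharpe(returns, periods_per_year=252):
    """Annualised Sharpe ratio of per-period returns (risk-free rate = 0)."""
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / len(returns)
    return (mean / math.sqrt(var)) * math.sqrt(periods_per_year)

def max_drawdown(returns):
    """Worst peak-to-trough loss of the compounded equity curve."""
    equity, peak, worst = 1.0, 1.0, 0.0
    for r in returns:
        equity *= 1 + r
        peak = max(peak, equity)
        worst = max(worst, 1 - equity / peak)
    return worst

def hist_var(returns, level=0.95):
    """Historical VaR: the loss exceeded in only (1 - level) of periods."""
    losses = sorted(-r for r in returns)
    return losses[int(level * len(losses))]

rets = [0.01, -0.02, 0.015, -0.005, 0.02, -0.03, 0.01, 0.005, -0.01, 0.02]
print(round(max_drawdown(rets), 4))   # worst single dip in this toy series
print(hist_var(rets, 0.9))            # 90% historical VaR
```

A production risk check would use far longer histories and a firm-specific VaR model, but the definitions are the same.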

Posted 1 day ago

Apply

8.0 years

0 Lacs

India

Remote


Job Title: Senior Data Engineer (PySpark | GCP | DataProc)
Location: Remote (Work from Anywhere; India Preferred)
Experience: 5–8 Years
Apply at: 📧 nikhil.kumar@krtrimaiq.ai

About the Role
We at KrtrimaIQ Cognitive Solutions are looking for a highly experienced and results-driven Senior Data Engineer to design and develop scalable, high-performance data pipelines and solutions in a cloud-native, big data environment. This is a fully remote role, ideal for professionals with deep hands-on experience in PySpark, Google Cloud Platform (GCP), and DataProc.

Key Responsibilities:
- Design, build, and maintain scalable ETL/ELT data pipelines using PySpark
- Develop and optimize data workflows leveraging GCP DataProc, BigQuery, Cloud Storage, and Cloud Composer
- Ingest, transform, and integrate structured and unstructured data from diverse sources
- Collaborate with data scientists, analysts, and cross-functional teams to deliver reliable, real-time data solutions
- Ensure performance, scalability, and reliability of data platforms
- Implement best practices for data governance, security, and quality

Must-Have Skills:
- Strong hands-on experience in PySpark and the Apache Spark ecosystem
- Proficiency with GCP services, especially DataProc, BigQuery, Cloud Storage, and Cloud Composer
- Experience with distributed data processing, ETL design, and data warehouse architecture
- Strong SQL skills and familiarity with NoSQL data stores
- Knowledge of CI/CD pipelines, version control (Git), and code review processes
- Ability to work independently in a remote setup, with strong communication skills

Preferred Skills:
- Exposure to real-time data processing tools like Kafka or Pub/Sub
- Familiarity with Airflow, Terraform, or other orchestration/automation tools
- Experience with data quality frameworks and observability tools

Why Join Us?
- 100% remote: work from anywhere
- High-impact role in a fast-growing AI-driven company
- Opportunity to work on enterprise-grade, large-scale data systems
- Collaborative and flexible work culture

📩 Interested candidates, please send your resume to: nikhil.kumar@krtrimaiq.ai

#SeniorDataEngineer #RemoteJobs #PySpark #GCPJobs #DataProc #BigQuery #CloudDataEngineer #DataEngineeringJobs #ETLPipelines #ApacheSpark #BigDataJobs #GoogleCloudJobs #HiringNow #DataPipelineEngineer #WorkFromHome #KrtrimaIQ #AIDataEngineering #DataJobsIndia
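The core of this role is PySpark pipeline work. PySpark itself is not shown here; instead, this stdlib sketch mirrors the flatMap → map → reduceByKey shape of a classic Spark word-count aggregation, which in the RDD API would read roughly as `rdd.flatMap(str.split).map(lambda w: (w, 1)).reduceByKey(operator.add)`:

```python
from collections import defaultdict
from itertools import chain

# Invented input; stands in for lines of a distributed text dataset.
lines = ["spark makes etl scalable", "etl pipelines feed bigquery", "spark again"]

# flatMap: split every line into words.
words = chain.from_iterable(line.split() for line in lines)

# map + reduceByKey: fold (word, 1) pairs into per-key counts.
counts = defaultdict(int)
for w in words:
    counts[w] += 1

print(dict(counts))
```

The difference in Spark is that each stage runs partitioned across a cluster; the per-key fold is what `reduceByKey` performs shuffle-side.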

Posted 1 day ago

Apply

5.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site


Job Description

Basic Responsibilities (Must-Haves):
- 5+ years of experience in dashboard story development, dashboard creation, and data engineering pipelines.
- Hands-on experience with log analytics, user engagement metrics, and product performance metrics.
- Ability to identify patterns, trends, and anomalies in log data to generate actionable insights for product enhancements and feature optimization.
- Collaborate with cross-functional teams to gather business requirements and translate them into functional and technical specifications.
- Manage and organize large volumes of application log data using Google BigQuery.
- Design and develop interactive dashboards to visualize key metrics and insights using tools such as Tableau, Power BI, or ThoughtSpot AI.
- Create intuitive, impactful visualizations to communicate findings to teams, including customer success and leadership.
- Ensure data integrity, consistency, and accessibility for analytical purposes.
- Analyse application logs to extract metrics and statistics related to product performance, customer behaviour, and user sentiment.
- Work closely with product teams to understand log data generated by Python-based applications.
- Collaborate with stakeholders to define key performance indicators (KPIs) and success metrics.
- Ability to optimize data pipelines and storage in BigQuery.
- Strong communication and teamwork skills.
- Ability to learn quickly and adapt to new technologies.
- Excellent problem-solving skills.

Preferred Responsibilities (Nice-to-Haves):
- Knowledge of Generative AI (GenAI) and LLM-based solutions.
- Experience designing and developing dashboards using ThoughtSpot AI.
- Good exposure to Google Cloud Platform (GCP).
- Data engineering experience with modern data warehouse architectures.

Additional Responsibilities:
- Participate in the development of proof-of-concepts (POCs) and pilot projects.
- Ability to articulate ideas and points of view clearly to the team.
- Take ownership of data analytics and data engineering solutions.

Additional Nice-to-Haves:
- Experience working with large datasets and distributed data processing tools such as Apache Spark or Hadoop.
- Familiarity with Agile development methodologies and version control systems like Git.
- Familiarity with ETL tools such as Informatica or Azure Data Factory.
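A typical task in this role is turning raw application logs into dashboard-ready engagement metrics. In production that would likely be a BigQuery GROUP BY; the pure-Python sketch below shows the same daily-active-users aggregation on invented event records:

```python
from collections import defaultdict

def daily_active_users(events):
    """Group raw app-log events into a date -> distinct-user count,
    the kind of engagement metric a dashboard tile would chart."""
    users_by_day = defaultdict(set)
    for e in events:
        # Timestamps are assumed ISO-8601; the first 10 chars are the date.
        users_by_day[e["ts"][:10]].add(e["user"])
    return {day: len(users) for day, users in sorted(users_by_day.items())}

events = [
    {"ts": "2024-06-01T09:00:00", "user": "u1"},
    {"ts": "2024-06-01T10:30:00", "user": "u1"},   # same user, same day
    {"ts": "2024-06-01T11:00:00", "user": "u2"},
    {"ts": "2024-06-02T08:15:00", "user": "u3"},
]
print(daily_active_users(events))  # → {'2024-06-01': 2, '2024-06-02': 1}
```

The BigQuery equivalent would be along the lines of `SELECT DATE(ts), COUNT(DISTINCT user) ... GROUP BY 1`, with the dashboard tool pointed at the result.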

Posted 1 day ago

Apply

2.0 years

0 Lacs

India

Remote


We are seeking a skilled Azure Data Engineer with hands-on experience in modern data engineering tools and platforms within the Azure ecosystem. The ideal candidate will have a strong foundation in data integration, transformation, and migration, along with a passion for working on complex data migration projects.

Job Title: Azure Data Engineer
Location: Remote
Work Timings: 2:00 PM – 11:00 PM IST
Please note: This is a pure Azure-specific role. If your expertise is primarily in AWS or GCP, we kindly request that you do not apply.

Key Responsibilities:
- Design, develop, and maintain data pipelines using Azure Data Factory / Synapse Data Factory to orchestrate and automate data workflows.
- Build and manage data lakes using Azure Data Lake, enabling secure and scalable storage for structured and unstructured data.
- Lead and support data migration initiatives (on-prem to cloud, cloud to cloud), ensuring minimal disruption and high data integrity.
- Perform advanced data transformations using Python, PySpark, and Azure Databricks or Synapse Spark Pools.
- Develop and optimize SQL / T-SQL queries for data extraction, manipulation, and reporting across Azure SQL services.
- Design and maintain ETL solutions using SSIS, where applicable.
- Collaborate with cross-functional teams to understand requirements and deliver data-driven solutions.
- Monitor, troubleshoot, and continuously improve data workflows to ensure performance, reliability, and scalability.
- Uphold best practices in data governance, security, and compliance.

Required Skills and Qualifications:
- 2+ years of experience as a Data Engineer, with a strong emphasis on Azure technologies.
- Proven expertise in: Azure Data Factory / Synapse Data Factory, Azure Data Lake, Azure Databricks / Synapse Spark, Python and PySpark, SQL / T-SQL, and SSIS.
- Demonstrated experience in data migration projects and eagerness to take on new migration challenges.
- Microsoft Certified: Azure Data Engineer Associate certification preferred.
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.

If you believe you are qualified and are looking forward to setting your career on a fast track, apply by submitting a few paragraphs explaining why you believe you are the right person for this role. To know more about Techolution, visit our website: www.techolution.com

About Techolution:
Techolution is a next-gen AI consulting firm on track to become one of the most admired brands in the world for "AI done right". Our purpose is to harness our expertise in novel technologies to deliver more profits for our enterprise clients while helping them deliver a better human experience for the communities they serve. At Techolution, we build custom AI solutions that produce revolutionary outcomes for enterprises worldwide. Specializing in "AI Done Right," we leverage our expertise and proprietary IP to transform operations and help achieve business goals efficiently. We are honored to have recently received the prestigious Inc 500 Best In Business award, a testament to our commitment to excellence. We were also awarded AI Solution Provider of the Year by The AI Summit 2023, were a Platinum sponsor at the Advantage DoD 2024 Symposium, and more! While we are big enough to be trusted by some of the greatest brands in the world, we are small enough to care about delivering meaningful, ROI-generating innovation at a guaranteed price for each client that we serve. Our thought leader, Luv Tulsidas, wrote and published a book in collaboration with Forbes, "Failing Fast? Secrets to Succeed Fast with AI". Refer here for more details on the content: https://www.luvtulsidas.com/

Let's explore further! Uncover our unique AI accelerators with us:
1. Enterprise LLM Studio: Our no-code DIY AI studio for enterprises. Choose an LLM, connect it to your data, and create an expert-level agent in 20 minutes.
2. AppMod.AI: Modernizes ancient tech stacks quickly, achieving over 80% autonomy for major brands!
3. ComputerVision.AI: Offers customizable Computer Vision and Audio AI models, plus DIY tools and a Real-Time Co-Pilot for human-AI collaboration!
4. Robotics and Edge Device Fabrication: Provides comprehensive robotics, hardware fabrication, and AI-integrated edge design services.
5. RLEF AI Platform: Our proven Reinforcement Learning with Expert Feedback (RLEF) approach bridges lab-grade AI to real-world AI.

Some videos you'll want to watch:
- Computer Vision demo at The AI Summit New York 2023
- Life at Techolution
- GoogleNext 2023
- Ai4 - Artificial Intelligence Conferences 2023
- WaWa - Solving Food Wastage
- Saving Lives - Brooklyn Hospital
- Innovation Done Right on Google Cloud
- Techolution featured on Worldwide Business with Kathy Ireland
- Techolution presented by ION World's Greatest

Visit us @ www.techolution.com to learn more about our revolutionary core practices and how we enrich the human experience with technology.

Posted 1 day ago

Apply

2.0 years

0 Lacs

India

Remote


🚀 Hiring: Python Developer (Remote | Night Shift) – Natlov Technologies Pvt. Ltd.
🌐 www.natlov.com

We're looking for a Python Developer with 1–2 years of experience and solid exposure to HPC environments, Singularity, and DevOps to join our dynamic remote team.

🔧 Tech Stack & Must-Have Skills
- 1–2 years of hands-on experience in Python development
- Experience with Singularity containerization
- Exposure to HPC/on-prem systems
- Familiarity with SLURM for workload management
- Strong understanding of DevOps & deployment: building pip packages, handling custom libraries
- CI/CD and automation workflows
- Working knowledge of GCP (Google Cloud Platform)

💼 Responsibilities
- Develop and optimize Python applications for HPC environments
- Build and manage Singularity containers
- Automate deployments and handle packages/libraries
- Collaborate with DevOps and infrastructure teams for smooth delivery

✅ Preferred
- Experience with Roche or Persistence projects
- Background in secure/scalable systems (e.g., research, healthcare domains)

📍 Location: Remote
🕒 Shift Timing: 6:00 PM to 3:00 AM IST (Night Shift)
💼 Experience Required: 1–2 Years
📧 Apply Now: Send your resume to techhr@natlov.com

Posted 1 day ago

Apply

10.0 years

0 Lacs

India

Remote


Job Title: Application Security Lead
Location: Remote (India-based)
Employment Type: Full-Time

About Us
We are a rapidly growing cybersecurity firm delivering advanced security solutions to enterprises across the Middle East, Europe, and the United States. Our mission is to empower organizations to build and operate secure applications through strategy-driven, risk-based, and modern security practices. We're looking for a seasoned application security leader to head our global application security initiatives.

Role Overview
As an Application Security Lead, you will spearhead both the strategic direction and technical execution of application security programs for our clients. You will act as a trusted advisor, shaping security roadmaps, driving secure SDLC adoption, leading architecture reviews, and enabling secure innovation across development teams.

Key Responsibilities

Strategic Leadership
- Develop and own enterprise-wide application security strategies tailored to each client's risk profile and maturity level.
- Define multi-phase strategic roadmaps aligned with OWASP SAMM, NIST, and ISO 27001 standards.
- Establish and evolve secure SDLC practices across diverse client environments.
- Advocate for and align AppSec priorities with broader business, DevOps, and GRC goals.
- Drive metrics-driven governance and periodic maturity assessments to track progress and demonstrate value.

Technical Execution
- Oversee secure code review processes and champion automated testing pipelines (SAST, DAST, SCA, etc.).
- Integrate security into CI/CD pipelines using tools like Veracode, Checkmarx, Fortify, SonarQube, and GitHub Advanced Security.
- Design and implement security control and requirements frameworks for web, mobile, API, and cloud-native applications.
- Guide remediation strategies, perform root cause analysis, and enable development teams to build secure code.
- Track and report application security KPIs and KRIs for technical and executive stakeholders.
- Lead application architecture risk analysis, threat modeling, and design review sessions.

Customer Engagement
- Act as the primary interface for customers across the US and Europe for all AppSec-related engagements.
- Lead strategic workshops and executive presentations, translating technical risk into business context.
- Deliver high-quality documentation, including AppSec policies, strategy decks, and board-level reporting.

Requirements

Must-Have
- 10+ years of progressive experience in application security, with at least 3 years in a strategic/architect-level role.
- Deep understanding of security frameworks: OWASP SAMM, OWASP ASVS, STRIDE, PASTA, and NIST 800-53.
- Hands-on experience with security tools across the SDLC: SAST, DAST, SCA, IAST, RASP.
- Strong grasp of secure architecture principles, cloud-native security (Azure/AWS/GCP), and API security.
- Demonstrated ability to lead AppSec strategy development and maturity assessments.
- Excellent stakeholder management, communication, and leadership skills.
- Bachelor's degree in Computer Science, Information Security, or a related field.

Preferred
- Professional certifications such as CSSLP, OSWE, GWAPT, or CISSP.
- Prior experience working with or advising enterprise clients in the US, Europe, or the Middle East.
- Familiarity with DevSecOps practices, threat intelligence, and regulatory compliance frameworks (e.g., GDPR, HIPAA, PCI-DSS).

Working Hours
Remote-first, with some overlap required for client meetings in Europe and US time zones.

Compensation
Base salary of $40–50k plus bonus; above-market compensation.
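Real SAST tooling of the kind named in postings like this (Veracode, Checkmarx, and so on) does vastly more, but a toy scanner makes the basic idea concrete: pattern rules applied line by line over source text. Both rule names and regexes below are hypothetical illustrations, not rules from any product:

```python
import re

# Two illustrative rules: a hardcoded password assignment and an
# AWS-access-key-shaped token. Real rulesets are far larger and smarter.
RULES = {
    "hardcoded-password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.I),
    "aws-access-key": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan(source):
    """Return (rule, line_number) pairs for every match in the source text."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule, pattern in RULES.items():
            if pattern.search(line):
                findings.append((rule, lineno))
    return findings

snippet = 'db_password = "hunter2"\nkey = "AKIAABCDEFGHIJKLMNOP"\n'
print(scan(snippet))
```

Production SAST works on parsed ASTs and data-flow graphs rather than regexes, which is why it can find injection paths this sketch never could; the sketch only shows the report shape (rule, location) that feeds remediation workflows.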

Posted 1 day ago

Apply

7.0 years

0 Lacs

India

On-site


The Opportunity: We are seeking a highly experienced and technically proficient Solution Consultant to join our growing team. In this pivotal role, you will be responsible for translating complex Supply Chain business challenges within the manufacturing industry into innovative Data and AI-driven solutions. You will be a trusted advisor to our clients, bridging the gap between business needs and technical capabilities, and ultimately driving the successful adoption of our cutting-edge solutions. Key Responsibilities: Solution Architecture & Design: Lead the design and architecture of end-to-end Data and AI solutions specifically tailored for Supply Chain use cases in manufacturing. This includes, but is not limited to, areas such as demand forecasting, inventory optimization, production planning, logistics and transportation optimization, supplier risk management, quality control, and predictive maintenance. Discovery & Needs Assessment: Conduct in-depth discovery sessions with clients to understand their current Supply Chain processes, pain points, data landscape, and strategic objectives. Identify opportunities where Data and AI can deliver significant business value. Use Case Definition & Prioritization: Collaborate with clients to define, prioritize, and articulate compelling Data and AI use cases, demonstrating a clear understanding of the ROI and impact on Supply Chain key performance indicators (KPIs). Technical Expertise & Guidance: Provide deep technical expertise in data engineering, machine learning, and AI concepts relevant to Supply Chain. Guide clients on data requirements, integration strategies, model selection, and deployment considerations. Proof-of-Concept (POC) & Pilot Support: Work closely with data science and engineering teams to support the development and demonstration of POCs and pilots, showcasing the capabilities and value of proposed solutions. 
Presales & Sales Enablement: Partner with sales teams to articulate the value proposition of our Data and AI solutions, deliver compelling presentations and demonstrations, and respond to technical questions during the sales cycle. Industry & Domain Expertise: Leverage a strong understanding of manufacturing industry dynamics, common Supply Chain challenges, and relevant industry standards (e.g., SCM, ERP systems, Industry 4.0). Stakeholder Management: Build strong relationships with various stakeholders, including business leaders, IT teams, data scientists, and engineers, at all levels of client organizations. Market Insights: Stay abreast of the latest trends, technologies, and best practices in Data, AI, and Supply Chain management within the manufacturing sector. Content Creation: Contribute to the development of solution accelerators, whitepapers, presentations, and other collateral that articulate our value proposition. Qualifications: Educational Background: Bachelor's or Master's degree in Computer Science, Data Science, Industrial Engineering, Supply Chain Management, or a related quantitative field. Experience: 7+ years of experience in a Solution Consultant, Solution Architect, or similar client-facing role. Proven track record of architecting and delivering successful Data and AI solutions for Supply Chain business functions. Strong domain expertise in the manufacturing industry vertical, with a deep understanding of its unique Supply Chain complexities. Hands-on experience with various stages of the data and AI lifecycle, from data ingestion and transformation to model development, deployment, and monitoring. Technical Skills: Proficiency in data technologies (e.g., SQL, NoSQL databases, data warehousing, data lakes). Experience with cloud platforms (e.g., AWS, Azure, GCP) and their respective AI/ML services. Familiarity with programming languages commonly used in data science (e.g., Python, R).
Understanding of machine learning algorithms and statistical modeling techniques relevant to forecasting, optimization, and classification. Knowledge of data visualization tools (e.g., Tableau, Power BI, Qlik Sense). Domain Knowledge: In-depth understanding of core Supply Chain processes in manufacturing (e.g., S&OP, demand planning, inventory management, production scheduling, logistics). Familiarity with common manufacturing systems (e.g., ERP, MES, APS). Soft Skills: Exceptional communication, presentation, and interpersonal skills with the ability to articulate complex technical concepts to non-technical audiences. Strong analytical and problem-solving abilities. Ability to work independently and as part of a collaborative team. Client-focused mindset with a passion for driving business outcomes. Strong business acumen and the ability to connect technology solutions to business value. Bonus Points If You Have: Experience with specific Supply Chain planning or optimization software. Certifications in cloud platforms (e.g., AWS Certified Solutions Architect, Azure AI Engineer). Experience with MLOps practices and tools. Prior experience working in a consulting environment.

Posted 1 day ago

Apply

15.0 years

0 Lacs

India

Remote


100% Remote Role Permanent position Job Title: Vice President (IT) - L&A insurance Required: Proven track record (15+ years) leading large-scale Enterprise IT Services and/or SaaS Support teams Deep expertise in L&A (Life and Annuity) insurance – You understand the industry’s pain points and know how to solve them at scale P&L ownership experience – You have led multi-million-dollar services operations, driving revenue and cost optimization Tech-savvy strategist – You know how to leverage AI, automation, DevOps, and ITIL best practices to modernize approaches to Service delivery Resilient, high-energy leader – You set the bar high and lead by example, inspiring teams to own the mission and execute with urgency Bachelor’s degree (or global equivalent) in Technology, Business Administration, Management, or a related field (Master’s degree preferred). Willingness to travel as needed and to work closely with teams in the office on a regular basis Preferred Technical Qualifications: Expertise in cloud platforms (AWS, Azure, GCP) and enterprise tools like ServiceNow, JIRA, Salesforce. Proficiency in Containers and CI/CD pipelines (Jenkins, GitHub Actions) Exposure to AI-driven automation in customer service and DevOps transformation. Expertise in Scaled Agile (SAFe) and Agile Service Management. Job Description: Reporting to the Senior Vice President of Services, our client is seeking a Vice President of Services responsible for leading, developing, and empowering high-performing Services teams in India. You will focus on enhancing productivity, breaking down obstacles that prevent the team from delivering optimal results, and creating an environment that fosters growth and efficiency. Your leadership and technical competence will drive improvements in service delivery, reduce friction in operations, and cultivate a culture where team members are set up for success.
Key Responsibilities Team Development & Growth: Lead, mentor, and coach a diverse team of technical Services professionals, promoting a culture of continuous learning, collaboration, and on-time delivery. Identify skill gaps and provide development opportunities, training, and resources to enhance individual and team performance. Develop clear career progression paths and ensure that team members have the tools and support needed to achieve their professional goals. Boosting Productivity & Operational Efficiency: Identify and eliminate roadblocks that hinder team productivity, ensuring that the team can focus on high-priority tasks. Streamline processes, workflows, and systems to maximize service efficiency while maintaining high-quality standards. Drive the use of tools and technologies that improve day-to-day service delivery, automate repetitive tasks, and reduce manual work. Removing Roadblocks: Proactively assess the work environment, identify areas of friction, and implement solutions that help the team work more effectively. Collaborate within the broader Services organization and with other departments, such as Global Support, Product, and Sales to remove any operational or structural barriers that may hinder the team’s ability to deliver services efficiently. Take a hands-on approach to problem-solving, whether it’s addressing resource allocation, resolving technical challenges, or improving communication. Performance Monitoring & Continuous Improvement: Establish and monitor key performance indicators (KPIs) to track team productivity, service delivery quality, and customer satisfaction. Continuously assess and optimize team workflows, ensuring that projects are executed on time, within scope, and within budget. Lead regular feedback sessions with team members to identify opportunities for process improvements and increase team satisfaction. 
Fostering Collaboration & Empowerment: Create a supportive environment where team members are encouraged to share ideas, collaborate, and innovate in how they approach service delivery. Encourage cross-functional collaboration, ensuring that the services team works seamlessly with other departments to achieve shared goals. Empower team members to make decisions, solve problems independently, and take ownership of their work. Client-Centric Leadership: Maintain a focus on delivering value to customers, ensuring that service delivery is aligned with client expectations. Work closely across the entire Services leadership team, including with Program Managers handling the day-to-day client interactions and implementations, to ensure that the technical teams are delivering the expected value. Monitor customer feedback and collaborate with the team to implement changes that improve the client experience.

Posted 1 day ago

Apply

15.0 years

0 Lacs

India

Remote


Job Title: Vice President (L&A insurance) Experience: 15+ years Location: Remote Our client believes in connecting people and business to Insurance in ways that are Innovative, Hyper-Relevant, Compelling and Personal. They bring together the brightest minds to build the future of Insurance; a world where Insurance makes life and business easier, more connected, and better protected. About the Role The Vice President of Services is responsible for leading, developing, and empowering high-performing Services teams in India. You will focus on enhancing productivity, breaking down obstacles that prevent the team from delivering optimal results, and creating an environment that fosters growth and efficiency. Your leadership and technical competence will drive improvements in service delivery, reduce friction in operations, and cultivate a culture where team members are set up for success. Qualifications: Proven track record (15+ years) leading large-scale Enterprise IT Services and/or SaaS Support teams Deep expertise in L&A insurance – You understand the industry’s pain points and know how to solve them at scale P&L ownership experience – You have led multi-million-dollar services operations, driving revenue and cost optimization Tech-savvy strategist – You know how to leverage AI, automation, DevOps, and ITIL best practices to modernize approaches to Service delivery Resilient, high-energy leader – You set the bar high and lead by example, inspiring teams to own the mission and execute with urgency Bachelor’s degree (or global equivalent) in Technology, Business Administration, Management, or a related field (Master’s degree preferred). Willingness to travel as needed and to work closely with teams in the office on a regular basis Preferred Technical Qualifications: Expertise in cloud platforms (AWS, Azure, GCP) and enterprise tools like ServiceNow, JIRA, Salesforce.
Proficiency in Containers and CI/CD pipelines (Jenkins, GitHub Actions) Exposure to AI-driven automation in customer service and DevOps transformation. Expertise in Scaled Agile (SAFe) and Agile Service Management. TribolaTech Founded in 2009, TribolaTech specializes in providing Information Technology Solutions and Outsourcing Services. Our executive teams have over 5 decades of combined experience in IT Consulting, Data Management and Staff Augmentation. We love technology and are proud to build a world-class global company. TribolaTech is committed to delivering quality solutions that provide exceptional value, innovation, assurance, and integrity to our customers. With deep industry and business process expertise, comprehensive resources and a proven track record, TribolaTech can mobilize the right people, process and technologies to help clients improve their business.

Posted 1 day ago

Apply

5.0 years

0 Lacs

India

Remote


We’re looking for a Technical Delivery Manager with a strong mix of data engineering expertise and delivery planning skills. What You’ll Do: Own backlog grooming, requirement translation, and delivery planning. Triage data platform tickets and work with L2/L3 support teams. Lead agile delivery teams and ensure technical alignment. Collaborate closely with stakeholders for smooth delivery. Must-Haves: 5+ years in technical product/project management within data platforms. Experience handling agile teams and managing end-to-end data initiatives. Hands-on with Spark, Delta Lake, Databricks, and one cloud platform (Azure, AWS, or GCP). Strong communication and planning skills. Proficient in Jira, Confluence. 📌 Bachelor’s degree in Computer Science or equivalent. 👥 Prior experience in team handling is essential. Be part of shaping the future of data. Let’s connect!

Posted 1 day ago

Apply

1.0 years

0 Lacs

India

Remote


Work Schedule Standard (Mon-Fri) Environmental Conditions Office Ensures that required essential documents are complete and in place, according to ICH-GCP and applicable regulations. Conducts on-site file reviews as per project specifications. Provides trial status tracking and progress update reports to the team as required. Ensures study systems are complete, accurate and updated per agreed study conventions (e.g. Clinical Trial Management System). Facilitates effective communication between investigative sites, client company and internal project teams through written, oral and/or electronic contacts. Responds to company, client and applicable regulatory requirements/audits/inspections. Participates in the investigator payment process. Ensures a shared responsibility with other project team members on issues/findings resolution. Investigates and follows up on findings as applicable. Participates in investigator meetings as needed. May help to identify potential investigators in collaboration with the client company to ensure the acceptability of qualified investigative sites. Initiates clinical trial sites according to relevant procedures to ensure compliance with the protocol and regulatory and ICH GCP obligations, making recommendations where warranted. Performs trial close out and retrieval of trial materials. Maintains and completes administrative tasks such as expense reports and timesheets in an accurate and timely manner. Contributes to the project team by assisting in preparation of project publications/tools and sharing ideas/suggestions with team members. Contributes to other project work and initiatives for process improvement, as required. Monitors investigator sites with a risk-based monitoring approach: applies root cause analysis (RCA), critical thinking and problem-solving skills to identify site process failures and corrective/preventive actions to bring the site into compliance and decrease risks.
Ensures data accuracy through SDR, SDV and CRF review as applicable through on-site and remote monitoring activities. Assesses investigational product through physical inventory and records review. Documents observations in reports and letters using approved business writing standards. Raises observed deficiencies and issues to clinical management expeditiously and follows all issues through to resolution. May need to maintain regular contact between monitoring visits with investigative sites to confirm that the protocol is being followed, that previously identified issues are being resolved and that the data is being recorded in a timely manner. Conducts monitoring tasks in accordance with the approved monitoring plan. Qualification: Must be a Life Science graduate Up to 1 year of on-site monitoring experience is preferred Should be willing to travel Should have good knowledge of ICH-GCP guidelines Willing to join us immediately

Posted 1 day ago

Apply

5.0 years

0 Lacs

Itanagar, Arunachal Pradesh, India

On-site


Job Description It is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most. Summary Database Engineer/ Developer - Core Skills Proficiency in SQL and relational database management systems like PostgreSQL or MySQL, along with database design principles. Strong familiarity with Python for scripting and data manipulation tasks, with additional knowledge of Python OOP being advantageous. A good understanding of data security measures and compliance is also required. Demonstrated problem-solving skills with a focus on optimizing database performance and automating data import processes, and knowledge of cloud-based databases like AWS RDS and Google BigQuery. Min 5 years of experience. JD Database Engineer - Data Research Engineering Position Overview At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance, so they can make informed decisions they feel confident in and get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. The team brings rich industry knowledge to Marketplace’s global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel. 
The Data Research Engineering Team is a brand new team with the purpose of managing data from acquisition to presentation, collaborating with other teams while also operating independently. Their responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. They play a crucial role in enabling data-driven decision-making and meeting the organization's data needs. A typical day in the life of a Database Engineer/Developer will involve designing, developing, and maintaining a robust and secure database infrastructure to efficiently manage company data. They collaborate with cross-functional teams to understand data requirements and migrate data from spreadsheets or other sources to relational databases or cloud-based solutions like Google BigQuery and AWS. They develop import workflows and scripts to automate data import processes, optimize database performance, ensure data integrity, and implement data security measures. Their creativity in problem-solving and continuous learning mindset contribute to improving data engineering processes. Proficiency in SQL, database design principles, and familiarity with Python programming are key qualifications for this role. Responsibilities Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely. Work with databases of varying scales, including small-scale databases, and databases involving big data processing. Work on data security and compliance, by implementing access controls, encryption, and compliance standards. Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture. 
Migrate data from spreadsheets or other sources to a relational database system (e.g., PostgreSQL, MySQL) or cloud-based solutions like Google BigQuery. Develop import workflows and scripts to automate the data import process and ensure data accuracy and consistency. Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms. Work with the team to ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity. Monitor database health and identify and resolve issues. Collaborate with the full-stack web developer in the team to support the implementation of efficient data access and retrieval mechanisms. Implement data security measures to protect sensitive information and comply with relevant regulations. Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows. Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices. Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines. Familiarize yourself with tools and technologies used in the team's workflow, such as Knime for data integration and analysis. Use Python for tasks such as data manipulation, automation, and scripting. Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines. Assume accountability for achieving development milestones. Prioritize tasks to ensure timely delivery, in a fast-paced environment with rapidly changing priorities. Collaborate with and assist fellow members of the Data Research Engineering Team as required. Perform tasks with precision and build reliable systems. Leverage online resources effectively like StackOverflow, ChatGPT, Bard, etc., while considering their capabilities and limitations. 
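The import-automation and validation duties listed above lend themselves to a short sketch. This is purely illustrative and not part of the job description: it uses Python with the standard-library sqlite3 module as a stand-in for PostgreSQL/MySQL, and the products table and its columns are hypothetical; with psycopg2 or SQLAlchemy against a real database the pattern is the same.

```python
import csv
import io
import sqlite3

def import_rows(conn, csv_text):
    """Validate CSV rows, bulk-insert the valid ones; return (inserted, rejected)."""
    reader = csv.DictReader(io.StringIO(csv_text))
    good, bad = [], []
    for row in reader:
        # Basic validation: required fields present and price is numeric.
        try:
            good.append((row["sku"].strip(), float(row["price"])))
        except (KeyError, ValueError):
            bad.append(row)
    conn.executemany("INSERT INTO products (sku, price) VALUES (?, ?)", good)
    conn.commit()
    return len(good), len(bad)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (sku TEXT PRIMARY KEY, price REAL)")
# An index supports the retrieval-optimization work mentioned above.
conn.execute("CREATE INDEX idx_products_price ON products (price)")

sample = "sku,price\nA-1,9.99\nA-2,not-a-number\nA-3,4.50\n"
inserted, rejected = import_rows(conn, sample)
print(inserted, rejected)  # 2 valid rows inserted, 1 rejected
```

A production version of such a workflow would typically add logging, batch large inserts, and quarantine rejected rows for review rather than silently dropping them.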
Skills And Experience Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential. Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team’s data presentation goals. Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes. Proficiency with tools like Prometheus, Grafana, or ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting. Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions). Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration. Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL. Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark. Knowledge of SQL and understanding of database design principles, normalization, and indexing. Knowledge of data migration, ETL (Extract, Transform, Load) processes, or integrating data from various sources. Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery. Eagerness to develop import workflows and scripts to automate data import processes. Knowledge of data security best practices, including access controls, encryption, and compliance standards. Strong problem-solving and analytical skills with attention to detail. Creative and critical thinking. Strong willingness to learn and expand knowledge in data engineering. Familiarity with Agile development methodologies is a plus. Experience with version control systems, such as Git, for collaborative development. 
Ability to thrive in a fast-paced environment with rapidly changing priorities. Ability to work collaboratively in a team environment. Good and effective communication skills. Comfortable with autonomy and ability to work independently.

Posted 1 day ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Title: Technical Analyst (JAVA) Experience: 8+ Years Location: Gurugram (Onsite) Employment Type: Full-Time Job Summary: We are seeking an experienced Technical Analyst with a strong foundation in Java development, microservices, and modern cloud-native architecture. The ideal candidate should have hands-on experience in Spring Boot, REST APIs, and distributed systems, along with a strong understanding of front-end frameworks and DevOps practices. A forward-thinking mindset with a willingness to adopt AI-assisted development tools is highly valued. Key Responsibilities: Analyze and design scalable backend services using Java (11+), Spring Boot, and related technologies Collaborate with cross-functional teams to translate business requirements into technical solutions Contribute to microservices architecture and ensure integration with APIs and external systems Implement and maintain solutions using JPA, Hibernate, MS-SQL, and PostgreSQL Develop and maintain frontend components using React or Angular, along with HTML, CSS3, or Tailwind CSS Utilize CI/CD tools (Jenkins, GitLab CI, GitHub Actions) for automated build, test, and deployment workflows Work within cloud environments (preferably AWS; Azure/GCP acceptable) and container orchestration tools like Kubernetes (EKS/AKS/GKE) Apply Domain-Driven Design (DDD) principles and implement Backend-for-Frontend (BFF) patterns where applicable Collaborate with stakeholders to design, document, and support robust, secure, and maintainable systems AI Integration Responsibilities: Use AI tools (e.g., GitHub Copilot, OpenAI Codex, Gemini) for code generation, unit testing, and documentation Explore and evaluate new AI-assisted development technologies and frameworks Collaborate with AI agents to optimize, refactor, and streamline code Contribute to integrating AI tools across the SDLC for enhanced productivity Required Skills: 8+ years of Java backend development (Java 11+) Strong experience with
Spring Framework, Spring Boot, and REST APIs Expertise in microservices, event-driven architecture, and distributed systems Proficient with JPA, Hibernate, MS-SQL, PostgreSQL Familiar with CI/CD pipelines and version control (Jenkins, GitLab CI, GitHub Actions, Git) Cloud exposure: AWS preferred; Azure/GCP also acceptable Frontend knowledge of React or Angular, HTML, CSS3/Tailwind Experience working with Kubernetes (EKS, AKS, GKE) Strong grasp of Agile/Scrum methodologies and practices Excellent problem-solving, communication, and documentation skills

Posted 1 day ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

Remote


Key Responsibilities Once 3-Month Training Is Completed Collaborate and work with multiple teams across geographies. Troubleshoot infrastructure issues and provide 24/7 support coverage. Document solutions and create knowledge base articles. Automate deployments utilizing custom templates and modules for customer environments on AWS. Create automation tools and processes to improve day-to-day functions. Ensure the control, integrity, and accessibility of the cloud environment for the enterprise. Lead Workload/Workforce Management and Optimization related tasks. Provide technical expertise as and when required Technical Expertise Basic programming/scripting knowledge in Python, Shell, Bash, Powershell, etc. Understanding of networking concepts (TCP/IP, DNS, HTTP/HTTPS). Basic knowledge of OS (Linux/Windows). Familiarity with databases (SQL concepts and basic queries) Understanding of web technologies (REST APIs, JSON, HTML/CSS) Awareness of any cloud technology platform (AWS/GCP/Azure). Knowledge of version control systems (Git/GitHub). Skills Passionate about technology and has a desire to constantly expand technical knowledge. Detail-oriented in documenting information and able to own customer issues through resolution. Able to handle multiple tasks and prioritize work under pressure. Demonstrate sound problem-solving skills coupled with a desire to take on responsibility. Strong written and verbal communication skills, both highly technical and non-technical. Ability to communicate technical issues to non-technical and technical audiences. Team collaboration skills for cross-functional projects Candidate Qualifying Criteria (Mandatory Requirements) Education Background: BE/B.Tech (CS/IT only) from a reputed university. Only 2024 and 2025 pass-outs. CGPA: 7.5+ Location: Candidate must be based out of Delhi/NCR only. Certifications: Any IT certification will be an added advantage Experience 0-6 Months IT Experience Physical Demands May require work on non-traditional shifts.
Should be able to work in a 24x7 environment. A willingness to work weekends and/or holidays when required as the business dictates. During Training Period 3 days in office training and 2 days remote Foundation & Associate level certifications will be completed in the 3rd month About Rackspace Technology We are the multicloud solutions experts. We combine our expertise with the world’s leading technologies — across applications, data and security — to deliver end-to-end solutions. We have a proven record of advising customers based on their business challenges, designing solutions that scale, building and managing those solutions, and optimizing returns into the future. Named a best place to work, year after year according to Fortune, Forbes and Glassdoor, we attract and develop world-class talent. Join us on our mission to embrace technology, empower customers and deliver the future. More on Rackspace Technology Though we’re all different, Rackers thrive through our connection to a central goal: to be a valued member of a winning team on an inspiring mission. We bring our whole selves to work every day. And we embrace the notion that unique perspectives fuel innovation and enable us to best serve our customers and communities around the globe. We welcome you to apply today and want you to know that we are committed to offering equal employment opportunity without regard to age, color, disability, gender reassignment or identity or expression, genetic information, marital or civil partner status, pregnancy or maternity status, military or veteran status, nationality, ethnic or national origin, race, religion or belief, sexual orientation, or any legally protected characteristic. If you have a disability or special need that requires accommodation, please let us know.

Posted 1 day ago

Apply

15.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Title - Head of AI Job Location - Bengaluru or Gurugram (Hybrid) Education - Must be from IIT/NIT or any other Tier 1 institute Experience - 15+ Years Role Overview: As the AI leader, you will lead a talented team of engineers and data scientists to develop and deliver a world-class AI-driven enterprise search solution, AI Agentic platform, recommendations and other key AI initiatives. You will provide strategic direction, foster innovation, and ensure the successful execution of our product roadmap. You should have a deep understanding of AI/ML technologies, a proven track record of leading successful AI products, and a passion for pushing the boundaries of what's possible. Key Responsibilities: Build, mentor, and lead a high-performing team of engineers and data scientists. Foster a collaborative, innovative environment and ensure the team stays insulated from external distractions. Drive the end-to-end product lifecycle from ideation and design to implementation and deployment, ensuring delivery excellence with a focus on quality, speed, and innovation. Provide deep technical guidance in AI, machine learning, NLP, and search technologies, staying current with cutting-edge advancements. Champion AI ethics and responsible development: Ensure that AI projects are developed and deployed ethically and responsibly, considering potential biases and societal impacts. Effectively communicate and collaborate with cross-functional stakeholders, confidently advocating for the team’s priorities and managing external expectations. Collaborate with cross-functional teams: Work closely with product, engineering, marketing, and other teams to integrate AI solutions into existing workflows and develop new AI-powered products and features. Actively consider product-market fit, customer value, and revenue implications, adopting a founder-like approach to growing and refining the product feature.
Independently make key decisions and take ownership of the product’s success, proactively addressing challenges and opportunities. Insulate the team from external noise, ensuring they maintain clear focus and direction. Experience & Skills: Proven track record of successfully leading and managing AI and search-related product development. Demonstrated hands-on expertise and deep expertise in Artificial Intelligence, Machine Learning, NLP, Information Retrieval, computer vision, reinforcement learning and enterprise search technologies. Strong understanding of enterprise search technologies and architectures. Proven track record in building, scaling, and managing AI-powered products. Excellent leadership, communication, interpersonal, problem-solving, and analytical skills. Proven ability to articulate a clear vision and align teams behind that vision. Strong experience in strategic planning and execution within agile environments. Demonstrated resilience, adaptability, and ability to thrive in fast-paced, evolving environments. Ability to professionally and effectively push back against stakeholder demands. Founder mentality with the ability to create, articulate, and execute a compelling vision and strategy. Qualifications: Bachelor's degree in Computer Science, Engineering, AI, Data Science, or related fields. Advanced degree (Master's or PhD) preferred. Bonus Points: Publications in top AI conferences or journals. Experience with cloud-based AI platforms (AWS, Azure, GCP). Show more Show less

Posted 1 day ago

Apply

3.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Freshworks makes it fast and easy for businesses to delight their customers and employees. We do this by taking a fresh approach to building and delivering software that is affordable, quick to implement, and designed for the end user. Headquartered in San Mateo, California, Freshworks has a global team operating from 13 locations to serve more than 65,000 companies -- from startups to public companies -- that rely on Freshworks software-as-a-service to enable a better customer experience (CRM, CX) and employee experience (ITSM). Freshworks' cloud-based software suite includes Freshdesk (omnichannel customer support), Freshsales (sales automation), Freshmarketer (marketing automation), Freshservice (IT service desk), and Freshchat (AI-powered bots), supported by Neo, our underlying platform of shared services.

Job Description

The Role
As a Demo Engineer, you are the technical powerhouse and product storyteller of our sales team. You will be instrumental in achieving our revenue goals by providing exceptional technical and product expertise to our prospective customers. You will be responsible for understanding a prospect's business challenges and delivering compelling, customized product demonstrations that clearly articulate the value and ROI of our solution. This is a critical role that bridges the gap between our sales team and our product, requiring a unique blend of technical acumen, business sense, and outstanding communication skills.

Key Responsibilities:
● Collaborate with Account Executives: Work alongside the sales team to strategize on account pursuits, understand customer needs, and prepare for prospect meetings.
● Lead Technical Discovery: Engage with prospects to uncover their technical and business requirements, identifying key pain points and opportunities where our platform can provide value.
● Deliver World-Class Demonstrations: Design and deliver engaging, value-driven product demonstrations to audiences ranging from technical staff to C-level executives.
● Build Custom Demo Environments: Configure and customize the demo environment with prospect-specific data and workflows to create a personalized and impactful experience.
● Act as the Product Expert: Serve as the primary technical point of contact for prospects, answering in-depth questions about product features, architecture, security, and integrations.
● Handle Technical Objections: Expertly address and overcome technical objections from prospects throughout the sales cycle.
● Support RFPs/RFIs: Provide detailed and accurate written responses for the technical components of RFPs (Requests for Proposal) and RFIs (Requests for Information).
● Be the Voice of the Customer: Act as a key liaison between the field and our Product/Engineering teams, channeling customer feedback to help shape the future of our product roadmap.
● Stay Ahead of the Curve: Continuously learn and maintain expert-level knowledge of our product, the competitive landscape, and industry trends.

Qualifications

Required Qualifications:
● 3-6 years of experience in a pre-sales, sales engineering, solutions consulting, or similar customer-facing technical role, preferably within a B2B SaaS company.
● Proven ability to understand complex business problems and map them to technical solutions.
● Exceptional presentation and communication skills, with the ability to tell a compelling story and articulate technical concepts clearly to both technical and non-technical audiences.
● A natural curiosity and a passion for technology and problem-solving.
● Ability to manage multiple projects simultaneously in a fast-paced environment.
● Self-motivated, proactive, and able to work effectively in a collaborative team setting.

Preferred Qualifications (Nice to Have):
● Experience with scripting languages (e.g., Python, JavaScript) for demo customization.
● Hands-on experience with REST APIs, webhooks, and common integration patterns.
● Familiarity with cloud platforms (AWS, Azure, GCP) and modern enterprise IT architecture.
● Experience working with global customers across different time zones and cultures.
● Knowledge of the [Your Industry Vertical] industry.

Additional Information

Skills Inventory: Demo Engineer

I. Technical Acumen
● Product Knowledge: Demonstrates a deep understanding of the platform's features, use cases, and limitations.
● Demo Environment Management: Shows the ability to set up, customize, and troubleshoot the standard demo environment.
● Scripting & Customization: Possesses the ability to write light scripts (e.g., using Python or JavaScript) to tailor demos or showcase integrations.
● API & Integrations: Can clearly explain and demonstrate how our APIs (e.g., REST) work and connect with other third-party systems.
● Cloud & Infrastructure Literacy: Understands basic concepts of cloud hosting (AWS/Azure/GCP), security principles, and data residency.
● Database Fundamentals: Has the ability to use basic queries (e.g., SQL) to manipulate data within the demo environment to make it relevant for prospects.

II. Sales & Business Acumen
● Discovery & Qualification: Asks insightful questions to effectively uncover prospect pain points, budget, authority, and timelines.
● Value-Based Storytelling: Consistently connects product features back to a specific business value or ROI for the prospect.
● Objection Handling: Effectively addresses and reframes technical and business-related objections from prospective customers.
● Competitive Analysis: Understands key competitors in the market and can clearly articulate our unique differentiators.
● Needs Analysis: Demonstrates the ability to accurately map complex customer requirements to the platform's capabilities.

III. Communication Skills
● Presentation & Demonstration Delivery: Presents with confidence, clarity, and energy, while effectively pacing the demo to engage the audience.
● Active Listening: Genuinely listens to the prospect's needs and challenges before formulating a response.
● Explaining Complex Concepts Simply: Can distill highly technical topics into simple, digestible terms for non-technical stakeholders.
● Written Communication: Writes clear, concise, and professional emails, RFP responses, and follow-up documentation.
● Internal Collaboration: Works effectively and builds strong relationships with Account Executives, Product, Marketing, and Engineering teams.

IV. Personal Attributes
● Problem-Solving: Thinks on their feet to creatively solve unexpected issues or questions during live demonstrations.
● Curiosity: Shows a strong and genuine desire to learn about the customer's business, our product, and new technologies.
● Composure Under Pressure: Stays calm and professional when facing tough questions or technical difficulties.
● Proactiveness / Self-Starter: Manages their own schedule and workload effectively without needing constant supervision.
● Customer Empathy: Genuinely seeks to understand and is driven to solve the customer's core problems.

At Freshworks, we are creating a global workplace that enables everyone to find their true potential, purpose, and passion irrespective of their background, gender, race, sexual orientation, religion and ethnicity. We are committed to providing equal opportunity for all and believe that diversity in the workplace creates a more vibrant, richer work environment that advances the goals of our employees, communities and the business.

Posted 1 day ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Jubilant Pharma Limited is a global integrated pharmaceutical company offering a wide range of products and services to its customers across geographies. We organise our business into two segments: Specialty Pharmaceuticals, comprising Radiopharmaceuticals (including radio pharmacies) and Contract Manufacturing of Sterile Injectable, Non-sterile & Allergy Therapy Products; and Generics & APIs, comprising Solid Dosage Formulations & Active Pharmaceutical Ingredients.

Jubilant Generics (JGL) is a wholly-owned subsidiary of Jubilant Pharma. JGL in India has Research & Development units at Noida and Mysore, and two manufacturing facilities, one at Mysore, Karnataka and another at Roorkee, Uttarakhand, engaged in API and dosage manufacturing respectively. The Mysore site is spread over 69 acres and is a USFDA-approved facility engaged in the manufacture of APIs, catering to sales worldwide. The API portfolio focuses on lifestyle-driven therapeutic areas (CVS, CNS) and targets complex and newly approved molecules. The company is the market leader in four APIs and is among the top three players for another three APIs in its portfolio, helping it maintain a high contribution margin. The Roorkee, Uttarakhand site is a state-of-the-art facility audited and approved by USFDA, Japan PMDA, UK MHRA, TGA, WHO and Brazil ANVISA. This business focuses on a B2B model for the EU, Canada and emerging markets. Both manufacturing units are backward-integrated and are supported by around 500 research and development professionals based at Noida and Mysore. R&D works on the development of new products in APIs and solid dosage formulations of oral solids, sterile injectables, semi-solid ointments, creams and liquids.

All BA/BE studies are done in-house at our 80-bed facility, which is inspected and approved by the Drugs Controller General (India) and holds global regulatory accreditations including USFDA, EMEA, ANVISA (Brazil), INFARMED (Portugal), NPRA (Malaysia) and AGES MEA (Austria) for GCP, along with NABL and CAP accreditations for path lab services. JGL's full-fledged Regulatory Affairs & IPR professionals ensure a unique portfolio of patents and product filings in regulated and non-regulated markets. Jubilant Pharma's revenue is growing steadily: it was INR 53,240 million in the Financial Year 2018-19, up from INR 39,950 million in the Financial Year 2017-18. Kindly refer to www.jubilantpharma.com for more information about the organization.

Posted 1 day ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Job Description

Key Responsibilities

Cloud Infrastructure Management
● Design, implement, and maintain cloud-based infrastructure on GCP.
● Monitor and optimize the performance, scalability, and reliability of the cloud environment.

Networking
● Configure and manage virtual private cloud (VPC) networks, subnets, firewalls, and VPNs.
● Implement and maintain load balancers, Cloud CDN, and hybrid connectivity solutions.
● Ensure secure and efficient network communication within GCP and with on-premises data centers.

System Administration
● Manage and maintain cloud-based servers/applications, storage, and databases.
● Perform system updates, patches, and backups.
● Monitor system performance and troubleshoot issues as they arise.

Security and Compliance
● Implement and enforce security best practices and compliance standards.
● Manage identity and access management (IAM) roles and permissions.
● Conduct regular security audits and vulnerability assessments.

Automation and Scripting
● Develop and maintain automation scripts for deployment, configuration, and management tasks.
● Utilize Infrastructure as Code (IaC) tools such as Terraform or Cloud Deployment Manager.

Documentation and Training
● Create and maintain comprehensive documentation for system configurations, processes, and procedures.
● Provide training and support to team members and stakeholders on GCP networking and system administration.

Preferred Skills:
● Experience with hybrid cloud environments and multi-cloud strategies.
● Knowledge of containerization and orchestration tools like Docker and Kubernetes.
● Familiarity with monitoring and logging tools such as Stackdriver, Prometheus, or Grafana.
● Strong communication and interpersonal skills.

Qualifications
● Bachelor's degree in Computer Science, Information Technology, or a related field.
● Minimum of 5 years of experience in system administration with a focus on cloud platforms, preferably GCP.
● Extensive knowledge of GCP networking components and patterns, including VPCs, subnets, firewalls, VPNs, and load balancers.
● Proficiency in cloud automation and scripting languages such as Python, Bash, or PowerShell.
● Experience with Infrastructure as Code (IaC) tools like Terraform, Ansible, or Cloud Deployment Manager.
● Strong understanding of security best practices and compliance requirements.
● Excellent problem-solving skills and the ability to work independently and as part of a team.
● GCP certifications such as Professional Cloud Network Engineer or Professional Cloud Architect are highly desirable.

About Us
Bristlecone is the leading provider of AI-powered application transformation services for the connected supply chain. We empower our customers with speed, visibility, automation, and resiliency to thrive on change. Our transformative solutions in Digital Logistics, Cognitive Manufacturing, Autonomous Planning, Smart Procurement and Digitalization are positioned around key industry pillars and delivered through a comprehensive portfolio of services spanning digital strategy, design and build, and implementation across a range of technology platforms. Bristlecone is ranked among the top ten leaders in supply chain services by Gartner. We are headquartered in San Jose, California, with locations across North America, Europe and Asia, and over 2,500 consultants. Bristlecone is part of the $19.4 billion Mahindra Group.

Equal Opportunity Employer
Bristlecone is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.

Information Security Responsibilities
● Understand and adhere to Information Security policies, guidelines and procedures, and practice them for the protection of organizational data and information systems.
● Take part in information security training and apply it when handling information.
● Report all suspected security and policy breaches to the InfoSec team or the appropriate authority (CISO).
● Understand and adhere to the additional information security responsibilities that are part of the assigned job role.
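A practical detail behind the VPC and firewall duties listed above is how GCP evaluates firewall rules: each rule carries a numeric priority (0-65535, lower number wins), and ingress traffic that matches no allow rule falls through to an implied deny. The sketch below is a toy model of that ordering with hypothetical rule data, not the real GCP API:

```python
# Toy model of GCP VPC firewall evaluation: rules are checked in
# ascending priority order (lower number = higher priority), and
# ingress traffic that matches nothing hits the implied deny rule.
# The rule data below is hypothetical, for illustration only.
def evaluate_ingress(rules, port):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if port in rule["ports"]:
            return rule["action"]
    return "deny"  # implied ingress deny

rules = [
    {"priority": 1000, "ports": {22, 443}, "action": "allow"},
    {"priority": 900,  "ports": {22},      "action": "deny"},  # wins for port 22
]

print(evaluate_ingress(rules, 22))    # deny: priority 900 beats 1000
print(evaluate_ingress(rules, 443))   # allow
print(evaluate_ingress(rules, 8080))  # deny: no rule matches
```

The same "lowest priority number wins, then implicit default" mental model is what interviewers usually probe when they ask why a seemingly broad allow rule is not taking effect.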

Posted 1 day ago

Apply

5.0 years

0 Lacs

New Delhi, Delhi, India

On-site


Senior Full Stack Developer (.NET + Angular)
Hybrid | Delhi, India | Product Development | Work with Swedish Team

We are looking for a Senior Full Stack Developer with strong technical expertise and strategic thinking to join our hybrid team in Delhi. You'll be working closely with a Swedish product team on a modern microservices architecture. The ideal candidate will be communicative, proactive, and experienced in both backend and frontend technologies, especially within a product development environment.

What Is the Role
As a Full Stack Developer, you'll be involved in building, maintaining, and scaling our core systems using .NET (C#) for the backend and Angular (TypeScript) for the frontend. You'll collaborate with international team members, contribute to architectural decisions, and ensure a high standard of code quality and product performance.

What You Will Be Doing
● Develop and maintain microservices using .NET (C#)
● Build user interfaces and frontend logic using Angular (TypeScript)
● Work in Google Cloud Platform (GCP) environments
● Use GitHub for version control and Confluence for documentation
● Collaborate with a Swedish-based product team
● Solve complex technical problems and propose scalable solutions
● Participate in strategic discussions related to product roadmap and architecture
● Guide junior team members and foster a collaborative work culture

What You Bring
● 5+ years of experience as a Full Stack Developer
● Strong backend experience in .NET (C#) and frontend experience in Angular (TypeScript)
● Familiarity with Google Cloud Platform (GCP) and CI/CD practices
● Proficiency in GitHub and Confluence
● Excellent communication skills, both verbal and written
● Strong problem-solving mindset and ability to think strategically
● Team player with the ability to work cross-functionally
● Experience in hybrid work environments

Bonus Points
● Prior experience working with Swedish or international teams
● Background in product-based companies
● Familiarity with microservices architecture at scale
● Experience in mentoring or technical leadership

Additional Information - We Take Care of Our People
● Location: Hybrid role (3 days a week from the Delhi office)
● Timings: Flexible, with overlap to collaborate with the Sweden team
● Culture: Global, product-first, and collaborative

Posted 1 day ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Job Title: Data Scientist
Experience: 6-10 years
Location: Noida
Contract duration: 6 months, extendable

Responsibilities:
● Model Development: Design and implement ML models to tackle complex business challenges.
● Data Preprocessing: Clean, preprocess, and analyze large datasets for meaningful insights and model features.
● Model Training: Train and fine-tune ML models using various techniques, including deep learning and ensemble methods.
● Evaluation and Optimization: Assess model performance and optimize for accuracy, efficiency, and scalability.
● Deployment: Deploy ML models in production and monitor their performance for reliability.
● Collaboration: Work with data scientists, engineers, and stakeholders to integrate ML solutions.
● Research: Stay updated on ML/AI advancements and contribute to internal knowledge.
● Documentation: Maintain comprehensive documentation for all ML models and processes.

Qualification: Bachelor's or master's degree in computer science, machine learning, data science, or a related field, with 6-10 years of experience.

Desirable Skills:

Must Have
1. Experience with time series forecasting, regression models, and classification models
2. Python, R, and data analysis
3. Large-scale data handling with Pandas, NumPy, and Matplotlib
4. Version control: Git or equivalent
5. ML frameworks: hands-on experience with TensorFlow, PyTorch, scikit-learn, and Keras
6. Good knowledge of a cloud platform (AWS/Azure/GCP), Docker, and Kubernetes
7. Model selection, evaluation, deployment, data collection and preprocessing, feature engineering, and estimation

Good to Have
● Experience with Big Data and analytics using technologies like Hadoop, Spark, etc.
● Additional experience or knowledge in AI/ML technologies beyond the mentioned frameworks
● BFSI and banking domain experience
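The must-have list above leads with time series forecasting and regression. In practice this work is done with libraries like scikit-learn or statsmodels, but the core idea can be sketched with the stdlib alone: fit a linear trend by ordinary least squares and extrapolate one step ahead. The series below is made up for illustration:

```python
# Fit y = a + b*t by ordinary least squares over a toy series, then
# forecast the next step. Pure stdlib; illustrative data only -- real
# forecasting work would use scikit-learn or statsmodels.
def fit_trend(series):
    n = len(series)
    t = list(range(n))
    t_mean = sum(t) / n
    y_mean = sum(series) / n
    # Slope: covariance of (t, y) over variance of t.
    b = sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, series)) \
        / sum((ti - t_mean) ** 2 for ti in t)
    a = y_mean - b * t_mean
    return a, b

def forecast_next(series):
    a, b = fit_trend(series)
    return a + b * len(series)

sales = [10.0, 12.0, 14.0, 16.0]  # perfectly linear toy series
print(forecast_next(sales))        # 18.0
```

A linear trend is only the degenerate case, but candidates are often asked to reason about exactly this decomposition (trend plus residual) before reaching for ARIMA or gradient-boosted models.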

Posted 1 day ago

Apply

4.0 years

0 Lacs

Surat, Gujarat, India

On-site


Senior Python Developer

A Senior Python Developer is a specialized software engineer with advanced expertise in developing scalable, efficient, and robust applications using the Python programming language. They have a deep understanding of Python frameworks such as Django, Flask, or FastAPI, and possess hands-on experience with RESTful API development, database integration, and cloud-based solutions. They are proficient in applying software engineering best practices, including code optimization, testing, and CI/CD. Their responsibilities often extend to architectural planning, mentoring junior developers, and ensuring the overall technical quality of the project.

Contributions of a Senior Python Developer
A Senior Python Developer plays a key role throughout the software development lifecycle, from architecture planning to deployment and maintenance. Their contributions include:
● Designing and implementing scalable backend architectures
● Developing RESTful APIs and microservices
● Ensuring integration with third-party services and APIs
● Optimizing databases for performance and scalability
● Leading code reviews, setting coding standards, and mentoring junior developers
● Collaborating on DevOps practices to support CI/CD pipelines

Expectations for a Senior Python Developer
● Proficiency in Python: Expertise in Python (3.x), with experience in one or more frameworks like Django, Flask, or FastAPI.
● Backend Development: Design, develop, and maintain scalable server-side logic with a strong focus on security and performance.
● Database Management: Experience with relational databases (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Redis) for data storage and retrieval.
● API Development & Integration: Build and integrate robust RESTful APIs; familiarity with GraphQL is a plus.
● Testing and Debugging: Write unit, integration, and functional tests to ensure code quality. Familiarity with Pytest or similar frameworks.
● System Architecture Planning: Ability to design and propose system architectures, microservices structures, and deployment strategies based on project requirements and scalability needs.
● Version Control: Proficiency with Git workflows (feature branching, pull requests, resolving merge conflicts), and familiarity with platforms like GitHub, GitLab, or Bitbucket.
● Security Best Practices: Apply secure coding principles, handle authentication/authorization (OAuth2, JWT), and protect applications against common vulnerabilities (OWASP Top 10).

Capabilities of a Senior Python Developer
● Education: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
● Cloud & DevOps Proficiency: Practical experience with cloud platforms (AWS, GCP, Azure), containerization (Docker), orchestration (Kubernetes), and CI/CD pipelines for continuous software delivery.
● Software Development Lifecycle (SDLC): Strong understanding of the entire SDLC, from requirement analysis and system design to implementation, testing, deployment, and maintenance.
● Proven Experience: Minimum 4+ years of demonstrable experience in Python development, with a portfolio of completed projects showcasing backend system designs and API development.

Benefits of joining Atologist Infotech
👉 Paid Leaves
👉 Leave Encashment
👉 Friendly Leave Policy
👉 5 Days Working
👉 Festivals Celebrations
👉 Friendly Environment
👉 Lucrative Salary Packages
👉 Paid Sick Leave
👉 Diwali Vacation
👉 Annual Big Tour
👉 Festive Off

If the above requirements suit your interest, please call us on +91 9909166110 or send your resume to hr@atologistinfotech.com
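The security expectations above name JWT-based authentication. At its core, a JWT is just a base64url-encoded header and payload signed with a key, which can be sketched with the stdlib alone. This is a simplified HS256-style illustration with a hypothetical secret; production code should use a vetted library such as PyJWT rather than hand-rolling token handling:

```python
import base64
import hashlib
import hmac
import json

def _b64(data: bytes) -> bytes:
    # JWTs use URL-safe base64 with the trailing padding stripped.
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign(payload: dict, secret: bytes) -> str:
    # Simplified HS256-style token: header.payload.signature
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64(json.dumps(payload).encode())
    sig = _b64(hmac.new(secret, header + b"." + body, hashlib.sha256).digest())
    return b".".join([header, body, sig]).decode()

def verify(token: str, secret: bytes) -> dict:
    header, body, sig = token.encode().split(b".")
    expected = _b64(hmac.new(secret, header + b"." + body, hashlib.sha256).digest())
    # Constant-time comparison to avoid timing side channels.
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    pad = b"=" * (-len(body) % 4)  # restore stripped padding before decoding
    return json.loads(base64.urlsafe_b64decode(body + pad))

token = sign({"sub": "user-42"}, b"demo-secret")
print(verify(token, b"demo-secret"))  # {'sub': 'user-42'}
```

The sketch deliberately omits the pieces a real implementation must have (expiry claims, algorithm allow-listing, key rotation), but it shows why a tampered payload or wrong secret fails verification.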

Posted 1 day ago

Apply

Exploring GCP Jobs in India

The job market for Google Cloud Platform (GCP) professionals in India is rapidly growing as more and more companies are moving towards cloud-based solutions. GCP offers a wide range of services and tools that help businesses in managing their infrastructure, data, and applications in the cloud. This has created a high demand for skilled professionals who can work with GCP effectively.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Hyderabad
  4. Pune
  5. Chennai

Average Salary Range

The average salary range for GCP professionals in India varies based on experience and job role. Entry-level positions can expect a salary range of INR 5-8 lakhs per annum, while experienced professionals can earn anywhere from INR 12-25 lakhs per annum.

Career Path

Typically, a career in GCP progresses from a Junior Developer to a Senior Developer, then to a Tech Lead position. As professionals gain more experience and expertise in GCP, they can move into roles such as Cloud Architect, Cloud Consultant, or Cloud Engineer.

Related Skills

In addition to GCP, professionals in this field are often expected to have skills in:

  • Cloud computing concepts
  • Programming languages such as Python, Java, or Go
  • DevOps tools and practices
  • Networking and security concepts
  • Data analytics and machine learning

Interview Questions

  • What is Google Cloud Platform and its key services? (basic)
  • Explain the difference between Google Cloud Storage and Google Cloud Bigtable. (medium)
  • How would you optimize costs in Google Cloud Platform? (medium)
  • Describe a project where you implemented CI/CD pipelines in GCP. (advanced)
  • How does Google Cloud Pub/Sub work and when would you use it? (medium)
  • What is Cloud Spanner and how is it different from other database services in GCP? (advanced)
  • Explain the concept of IAM and how it is implemented in GCP. (medium)
  • How would you securely transfer data between different regions in GCP? (advanced)
  • What is Google Kubernetes Engine (GKE) and how does it simplify container management? (medium)
  • Describe a scenario where you used Google Cloud Functions in a project. (advanced)
  • How do you monitor performance and troubleshoot issues in GCP? (medium)
  • What is Google Cloud SQL and when would you choose it over other database options? (medium)
  • Explain the concept of VPC (Virtual Private Cloud) in GCP. (basic)
  • How do you ensure data security and compliance in GCP? (medium)
  • Describe a project where you integrated Google Cloud AI services. (advanced)
  • What is the difference between Google Cloud CDN and Google Cloud Load Balancing? (medium)
  • How do you handle disaster recovery and backups in GCP? (medium)
  • Explain the concept of auto-scaling in GCP and when it is useful. (medium)
  • How would you set up a multi-region deployment in GCP for high availability? (advanced)
  • Describe a project where you used Google Cloud Dataflow for data processing. (advanced)
  • What are the best practices for optimizing performance in Google Cloud Platform? (medium)
  • How do you manage access control and permissions in GCP? (medium)
  • Explain the concept of serverless computing and how it is implemented in GCP. (medium)
  • What is the difference between Google Cloud Identity and Access Management (IAM) and AWS IAM? (advanced)
  • How do you ensure data encryption at rest and in transit in GCP? (medium)
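Several of the questions above (IAM, access control and permissions) come down to one resolution rule: roles bundle permissions, bindings grant roles to members, and a member may perform an action only if some bound role contains the required permission. A toy model of that resolution, using hypothetical role and permission names rather than GCP's real role catalog:

```python
# Toy IAM model: roles bundle permissions; a binding grants a role to a
# set of members; a member is allowed an action iff some role bound to
# them contains the required permission. Names here are hypothetical,
# not GCP's actual role catalog.
ROLES = {
    "roles/storage.viewer": {"storage.objects.get", "storage.objects.list"},
    "roles/storage.admin":  {"storage.objects.get", "storage.objects.list",
                             "storage.objects.delete"},
}

BINDINGS = [
    {"role": "roles/storage.viewer", "members": {"user:ana@example.com"}},
    {"role": "roles/storage.admin",  "members": {"user:raj@example.com"}},
]

def has_permission(member: str, permission: str) -> bool:
    return any(
        member in binding["members"] and permission in ROLES[binding["role"]]
        for binding in BINDINGS
    )

print(has_permission("user:ana@example.com", "storage.objects.get"))     # True
print(has_permission("user:ana@example.com", "storage.objects.delete"))  # False
```

In an interview, the follow-up is usually about least privilege: grant the narrowest role whose permission set covers the need, rather than a broad admin role.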

Closing Remark

As the demand for GCP professionals continues to rise in India, now is the perfect time to upskill and pursue a career in this field. By mastering GCP and related skills, you can unlock numerous opportunities and build a successful career in cloud computing. Prepare well, showcase your expertise confidently, and land your dream job in the thriving GCP job market in India.


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies