
15888 GCP Jobs - Page 25

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

0 years

0 Lacs

India

On-site


Job Description

Would you enjoy working with large distributed systems? Do you enjoy using your expertise to mentor others?

Join our cutting-edge Web Security Team
Our team develops and sells Akamai's carrier network security products to fixed and mobile network service providers. We specialize in delivering highly scalable network infrastructure and access-based security products to our customers. We collaborate with product groups and external partners to enable our customers to provide high-quality, secure internet and network access to their end users.

Develop secure, reliable and maintainable code
As a Software Engineer Senior II, you'll develop software that runs one of the largest distributed systems in the world. You will play a key role in our growth strategy. You'll be creating innovative solutions for our network challenges and clients, with the aim of increasing internet traffic and making it faster, more reliable and more secure.

As a Software Engineer Senior II, you will be responsible for:
- Collaborating with other software engineering teams to influence and shape the design process and product development
- Creating and enhancing new features and functionality, including conception, design, testing and deployment
- Being the subject matter and team expert, leading and mentoring others to optimize their productivity
- Working with internal teams to troubleshoot complex problems in our network for our customers

Do What You Love
To be successful in this role you will:
- Have experience of software development life cycles and writing code in Golang/C/C++ within a Unix/Linux environment
- Understand how the internet and its networks/protocols (IP, DNS, routing, HTTP, TCP or web architecture) work
- Demonstrate a keen interest in learning new technologies
- Take ownership of product development and enhancement
- Be a natural communicator with an interest in mentoring others using your subject matter expertise
- Have experience with cloud platforms such as AWS/GCP/Azure
- Be adept with cloud-native constructs and deployments

Work in a way that works for you
FlexBase, Akamai's Global Flexible Working Program, is based on the principles that are helping us create the best workplace in the world. When our colleagues said that flexible working was important to them, we listened. We also know flexible working is important to many of the incredible people considering joining Akamai. FlexBase gives 95% of employees the choice to work from their home, their office, or both (in the country advertised). This permanent workplace flexibility program is consistent and fair globally, to help us find incredible talent virtually anywhere. We are happy to discuss working options for this role and encourage you to speak with your recruiter in more detail when you apply.

Learn what makes Akamai a great place to work
Connect with us on social and see what life at Akamai is like! We power and protect life online by solving the toughest challenges, together. At Akamai, we're curious, innovative, collaborative and tenacious. We celebrate diversity of thought and we hold an unwavering belief that we can make a meaningful difference. Our teams use their global perspectives to put customers at the forefront of everything they do, so if you are people-centric, you'll thrive here.

Working for You: Benefits
At Akamai, we will provide you with opportunities to grow, flourish, and achieve great things. Our benefit options are designed to meet your individual needs and budget, both today and in the future, and cover all aspects of your life: your health, your finances, your family, your time at work, and your time pursuing other endeavors.

About Us
Akamai powers and protects life online. Leading companies worldwide choose Akamai to build, deliver, and secure their digital experiences, helping billions of people live, work, and play every day. With the world's most distributed compute platform, from cloud to edge, we make it easy for customers to develop and run applications while we keep experiences closer to users and threats farther away.

Join us
Are you seeking an opportunity to make a real difference in a company with a global reach and exciting services and clients? Come join us and grow with a team of people who will energize and inspire you!

Posted 1 day ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Work Location: Pune
Experience: 3+ years

Required Skills & Qualifications
- 3+ years of professional Python development experience
- Strong expertise in Django (Django REST Framework preferred)
- Advanced SQL skills (query optimization, joins, stored procedures, indexing)
- Proficiency in Pandas and NumPy for data manipulation
- Hands-on experience in API development (REST, GraphQL, or FastAPI)
- Experience with PostgreSQL, MySQL, or other relational databases
- Knowledge of ORM frameworks (Django ORM, SQLAlchemy)
- Familiarity with Git, CI/CD, and Agile methodologies
- Strong problem-solving and debugging skills

Preferred Qualifications
- Experience with FastAPI or Flask for lightweight API development
- Knowledge of NoSQL databases (MongoDB, Redis)
- Exposure to cloud platforms (AWS, GCP, Azure) and Docker
- Understanding of data warehousing and ETL processes
- Familiarity with asynchronous programming (Celery, asyncio)
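The SQL skills this listing names (joins, indexing, aggregation) can be sketched in a few lines with Python's built-in sqlite3 module. The schema and data below are hypothetical illustrations, not anything from the posting:

```python
import sqlite3

# Toy schema: users and their orders, entirely made up for illustration.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL)")
cur.executemany("INSERT INTO users VALUES (?, ?)", [(1, "asha"), (2, "ravi")])
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, 1, 250.0), (2, 1, 100.0), (3, 2, 75.0)])

# An index on the join column lets the planner avoid scanning all of orders
# for each user, the kind of optimization the listing's "indexing" point means.
cur.execute("CREATE INDEX idx_orders_user ON orders(user_id)")

# Join + aggregate: total spend per user, highest first.
rows = cur.execute(
    "SELECT u.name, SUM(o.total) AS spend "
    "FROM users u JOIN orders o ON o.user_id = u.id "
    "GROUP BY u.id ORDER BY spend DESC"
).fetchall()
conn.close()
```

Running `EXPLAIN QUERY PLAN` on the same statement before and after creating the index is a quick way to see the optimizer switch from a scan to an index lookup.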

Posted 1 day ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Company Overview
With 80,000 customers across 150 countries, UKG is the largest U.S.-based private software company in the world. And we're only getting started. Ready to bring your bold ideas and collaborative mindset to an organization that still has so much more to build and achieve? Read on.

At UKG, you get more than just a job. You get to work with purpose. Our team of U Krewers are on a mission to inspire every organization to become a great place to work through our award-winning HR technology built for all. Here, we know that you're more than your work. That's why our benefits help you thrive personally and professionally, from wellness programs and tuition reimbursement to U Choose, a customizable expense reimbursement program that can be used for more than 200 needs that best suit you and your family, from student loan repayment to childcare to pet insurance. Our inclusive culture, active and engaged employee resource groups, and caring leaders value every voice and support you in doing the best work of your career. If you're passionate about our purpose (people) then we can't wait to support whatever gives you purpose. We're united by purpose, inspired by you.

Principal Site Reliability Engineers at UKG are critical team members with a breadth of knowledge encompassing all aspects of service delivery. They develop software solutions to enhance, harden, and support our service delivery processes. This can include building and managing CI/CD deployment pipelines, automated testing, capacity planning, performance analysis, monitoring, alerting, chaos engineering, and auto-remediation. Principal Site Reliability Engineers must be passionate about learning and evolving with current technology trends. They strive to innovate and are relentless in pursuing a flawless customer experience. They have an "automate everything" mindset, helping us bring value to our customers by deploying services with incredible speed, consistency, and availability.

Primary/Essential Duties and Key Responsibilities
- Engage in and improve the lifecycle of services from conception to EOL, including system design consulting and capacity planning
- Define and implement standards and best practices related to system architecture, service delivery, metrics, and the automation of operational tasks
- Support services, product, and engineering teams by providing common tooling and frameworks to deliver increased availability and improved incident response
- Improve system performance, application delivery, and efficiency through automation, process refinement, postmortem reviews, and in-depth configuration analysis
- Collaborate closely with engineering professionals within the organization to deliver reliable services
- Increase operational efficiency, effectiveness, and quality of services by treating operational challenges as a software engineering problem (reduce toil)
- Guide junior team members and serve as a champion for Site Reliability Engineering
- Actively participate in incident response, including on-call responsibilities
- Partner with stakeholders to influence and help drive the best possible technical and business outcomes

Qualifications (Experience, Education, Certification, License and Training)
- Engineering degree, a related technical discipline, or equivalent work experience
- Experience coding in higher-level languages (e.g., Python, JavaScript, C++, or Java)
- Knowledge of cloud-based applications and containerization technologies
- Demonstrated understanding of best practices in metric generation and collection, log aggregation pipelines, time-series databases, and distributed tracing
- Demonstrable fundamentals in two of the following: Computer Science, Cloud Architecture, Security, or Network Design
- Working experience with industry standards like Terraform and Ansible
- At least 10 years of hands-on experience working in Engineering or Cloud
- Minimum 6 years' experience with public cloud platforms (e.g., GCP, AWS, Azure)
- Minimum 5 years' experience in configuration and maintenance of applications and/or systems infrastructure for a large-scale, customer-facing company
- Experience with distributed system design and architecture

Where we're going
UKG is on the cusp of something truly special. Worldwide, we already hold the #1 market share position for workforce management and the #2 position for human capital management. Tens of millions of frontline workers start and end their days with our software, with billions of shifts managed annually through UKG solutions today. Yet it's our AI-powered product portfolio, designed to support customers of all sizes, industries, and geographies, that will propel us into an even brighter tomorrow!

UKG is proud to be an equal opportunity employer and is committed to promoting diversity and inclusion in the workplace, including the recruitment process.

Disability Accommodation
For individuals with disabilities that need additional assistance at any point in the application and interview process, please email UKGCareers@ukg.com

Posted 1 day ago

Apply

1.0 - 3.0 years

0 Lacs

New Delhi, Delhi, India

On-site


Role Overview
CuriousBox AI is looking for a talented and driven Backend Developer to join our core founding team. As one of the first engineering hires, you'll play a pivotal role in designing, building, and scaling our AI-driven products. If you thrive on challenges and are passionate about generative AI, come help shape the future at CuriousBox AI!

Key Responsibilities
- Architect, develop, and maintain backend services and APIs using Python and TypeScript
- Design and optimize data models, queries, and operations for MongoDB
- Collaborate with other developers to integrate generative AI applications into products
- Maintain high code quality, security, and reliability in a fast-paced, dynamic environment
- Optimize performance, scalability, and reliability of backend systems
- Write thoroughly tested, well-documented code and participate in code reviews
- Contribute ideas for best practices, architecture, and process improvements as a key member of the founding team

Requirements
- 1-3 years of professional experience in backend development
- Proficiency in Python and TypeScript
- Experience with MongoDB or similar NoSQL databases
- Hands-on experience creating and deploying applications utilizing generative AI models (e.g., LLMs, diffusion models, transformers)
- Solid understanding of RESTful API design, authentication, and security practices
- Familiarity with cloud platforms (e.g., AWS, GCP, Azure) is a plus
- Strong problem-solving, debugging, and analytical skills
- Excellent collaboration and communication skills
- Willingness to work from our New Delhi office at least twice a week initially, with openness to full-time in-office as the team scales

Nice to Have
- Experience working in early-stage startups or as a founding engineer
- Exposure to microservices architecture
- Knowledge of containerization (Docker, Kubernetes)
- Interest in building scalable, data-driven products in a creative environment

What We Offer
- Opportunity to be a core team member at a fast-growing AI startup
- ESOPs (Employee Stock Options) on offer
- Impactful ownership of products and technology decisions
- Competitive salary and benefits (10-15 LPA)
- Highly collaborative, growth-oriented culture

Posted 1 day ago

Apply

20.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote


Job Description
Scope: Global - On-Prem, GCP, Azure, Office 365, ServiceNow
Team Size: 160+ team members across 23 different countries
Remote/Hybrid/Onsite: Remote possible with regular travel (1-2 times a month)

Enterprise Technology is searching for a Senior Director of Digital Employee Experience and Support who will be responsible for driving the strategic direction and operational excellence of digital employee experience (DEX) and related support functions. The role involves leading a globally distributed team to deliver exceptional employee experiences, enhance user engagement, drive automation opportunities, modernize support options, and implement innovative solutions to improve customer satisfaction and operational efficiency. This is a great opportunity to apply your unique product, design, and technology skillsets to create an exceptional customer experience focused on automation, self-service, and improved employee productivity. This pivotal role demands a transformational, strategic, and operationally savvy leader to inspire excellent customer support, handle critical customer concerns, develop talent, and orchestrate innovation and advocacy!

Responsibilities
- Develop and implement a strategic approach to transform DEX, Support, and Program/Project Mgmt.
- Directly oversee all aspects of the transformation journey, from envisioning the new state and strategizing the transformation to realizing the anticipated changes.
- Establish and maintain a governance framework to enable visibility into the execution of our strategy and provide oversight and leadership to course-correct where necessary.
- Leverage deep industry connections to stay at the forefront of workplace trends, employee experience, and support innovations.
- Develop and lead a team specializing in Digital Employee Experience (DEX), automation, Program/Project Mgmt, and technology support.
- Lead technology modernization projects across distributed sites, including workplace and manufacturing locations.
- Modernize support services, including virtual service desks, physical tech lounges, Site IT teams, and distributed Program/Project Management services.
- Modernize Program/Project Mgmt capabilities aligned with Agile methodologies and product-led organizations.
- Prepare and present comprehensive reports on team performance using stretch objectives, key results, and key performance indicators.
- Establish and maintain strong partnerships with key stakeholders across IT, information security, product engineering, human resources, and facilities management.
- Global responsibilities include User Experience, Service Delivery, Service Provisioning, Service Desk Operations, Tech Lounge Operations, Executive Support Operations, Site IT Mgmt for 330+ locations, IT Program/Project Mgmt Services, and Business Relationship Mgmt.

Qualifications
Basic Requirements:
- 20+ years total combined IT experience, with at least 15 years leading large technical delivery
- 5+ years' experience in designing and implementing end-user, employee, and support services
- Demonstrated experience in designing, building, and managing End User, Digital Employee Experience, and Support services, preferably intended for hybrid working environments in large enterprises
- Experience in formulating and implementing Employee Experience strategy in support of Workplace Modernization, Transformation, and Productivity Improvements
- MBA, PhD, or equivalent experience preferred
- Experience with End User technology products
- Experience with User Experience design
- Experience with End User Support services (e.g., Help Desk, Tech Lounge, Executive Support)
- Experience with Video Conferencing and Collaboration services
- Experience with Microsoft 365 and MS Teams
- Experience with Microsoft Windows and Apple Mac enterprise solutions
- Experience with the ServiceNow platform
- Experience with GCP and Azure clouds a plus
- A bias for value, speed, and quality to implement strategic goals in the direction of Digital Employee Experiences, improving Employee Productivity, driving Excellent Customer Service, and leveraging automation and self-service to enable End Users

Preferred Requirements:
- Deep experience in leading Employee Experience, End-User, and IT Support services for large enterprises
- Visionary leader who maintains an evergreen view of our future state, challenges the status quo, and delivers measurable results against a strategic roadmap
- Passion for driving improved employee experiences across a large enterprise
- Maintains deep connections to industry leaders and peers, leading industry innovation and sharing trends
- Actively assumes ownership of new initiatives and showcases steadfast outcomes
- Experience researching and implementing new/emerging tech to drive improvements and business value
- Experienced operational leader, driving excellence and continuous improvement with technology teams
- Deep technical leadership experience with ITIL and ITSM toolsets, preferably ServiceNow
- Deep technical leadership experience with Prog/Project Mgmt, Agile, product-led orgs, and tools (Atlassian, Jira, automated CI/CD pipelines, etc.)
- Extensive experience managing third-party vendors delivering IT services in a large enterprise
- Experience in infrastructure strategy, public and private cloud, security, server, storage, and IT ops
- Strong budget management skills and proven success in delivering IT service delivery and support
- Highly collaborative with strong influencing skills; highly resourceful, self-driven, and results-oriented
- Inspires a diverse, globally distributed team, fostering collaboration, innovation, and continuous improvement
- Highly organized, an effective communicator, and a natural influencer
- Demonstrated ability to recruit, develop, inspire, and retain high-performing professionals
- Effective at working with geographically remote and culturally diverse teams
- Experience working in a matrixed team structure and influencing across product areas
- Ability to manage a portfolio of projects in a fast-paced environment, adapting to shifting priorities

Posted 1 day ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site


This role is for one of Weekday's clients
Salary range: Rs 1800000 - Rs 2200000 (i.e., INR 18-22 LPA)
Min Experience: 3 years
Location: Gurugram, NCR, Delhi
Job Type: full-time

Requirements

About the Role:
We are seeking an experienced and highly motivated Python Developer to join our dynamic technology team. As a Python Developer, you will play a key role in designing, developing, testing, and maintaining robust and scalable software applications using Python. This role demands a solid foundation in Python programming and the ability to work collaboratively within a cross-functional team of developers, data engineers, and product managers.

Key Responsibilities:
- Application Development: Design and develop scalable and high-performance applications using Python.
- Code Quality: Write clean, efficient, and well-documented code following best practices and coding standards.
- Testing & Debugging: Develop and maintain unit and integration tests to ensure code reliability. Troubleshoot, debug, and upgrade existing systems as needed.
- API Integration: Design and integrate RESTful APIs with frontend applications and third-party services.
- Database Management: Work with relational and NoSQL databases such as MySQL, PostgreSQL, or MongoDB to manage data models and queries.
- Collaboration: Collaborate with cross-functional teams to understand business requirements and translate them into technical specifications.
- Version Control: Use Git for version control and participate in code reviews to ensure quality and consistency.
- Documentation: Maintain technical documentation to support development processes and future maintenance.

Key Skills & Qualifications:
- Proven Experience: At least 3 years of hands-on experience in Python application development.
- Core Python: Strong grasp of core Python concepts, including data structures, OOP, exception handling, decorators, and context managers.
- Frameworks: Proficiency with Python frameworks such as Django, Flask, or FastAPI.
- Database Knowledge: Experience working with SQL databases like PostgreSQL or MySQL and familiarity with ORM libraries like SQLAlchemy or Django ORM.
- API Development: Ability to build and consume RESTful APIs; understanding of API authentication mechanisms such as JWT or OAuth.
- Testing & CI/CD: Familiarity with testing frameworks like PyTest or unittest. Experience with continuous integration tools is a plus.
- Cloud & Deployment: Exposure to deploying applications on cloud platforms such as AWS, GCP, or Azure is desirable.
- Version Control: Proficiency with Git for source control and experience with GitHub/GitLab workflows.
- Soft Skills: Strong problem-solving abilities, excellent communication skills, and a collaborative attitude.

Preferred Qualifications (Nice to Have):
- Experience with containerization tools like Docker.
- Familiarity with task queues and asynchronous programming (e.g., Celery, asyncio).
- Knowledge of frontend technologies (HTML, CSS, JavaScript) is a plus.
- Understanding of Agile development methodologies.
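The JWT authentication this listing mentions boils down to HMAC-signing a base64url-encoded header and payload. A stdlib-only sketch of the idea follows; this is a toy, not a substitute for a vetted library such as PyJWT, and the payload and secret are invented:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str) -> str:
    # header.payload is signed with HMAC-SHA256 (the "HS256" algorithm)
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = ".".join(
        b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, payload)
    )
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

def verify_jwt(token: str, secret: str) -> dict:
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    # constant-time comparison guards against timing attacks
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("bad signature")
    payload_b64 = signing_input.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

token = sign_jwt({"sub": "user-42"}, "demo-secret")
```

A real implementation would also carry and check registered claims such as `exp` and `iss`; the sketch only covers the signing mechanics.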

Posted 1 day ago

Apply

3.0 - 5.0 years

19 - 25 Lacs

Bengaluru

Work from Office


Role & Responsibilities
- Strong in programming languages like Python and Java
- Must have hands-on experience with one cloud (GCP preferred)
- Must have: experience working with Docker
- Must have: environment management (e.g., venv, pip, poetry)
- Must have: experience with orchestrators like Vertex AI Pipelines, Airflow, etc.
- Must have: data engineering and feature engineering techniques
- Proficient in Apache Spark, Apache Beam, or Apache Flink
- Must have: advanced SQL knowledge
- Must be aware of streaming concepts like windowing, late arrival, triggers, etc.
- Should have hands-on experience with distributed computing
- Should have working experience in data architecture design
- Should be aware of storage and compute options and when to choose what
- Should have a good understanding of cluster optimisation/pipeline optimisation strategies
- Should have exposure to GCP tools to develop end-to-end data pipelines for various scenarios (including ingesting data from traditional databases as well as integration of API-based data sources)
- Should have a business mindset to understand data and how it will be used for BI and analytics purposes
- Should have working experience with CI/CD pipelines, deployment methodologies, and Infrastructure as Code (e.g., Terraform)
- Good to have: hands-on experience with Kubernetes
- Good to have: vector databases like Qdrant

Experience in working with GCP tools like:
- Storage: CloudSQL, Cloud Storage, Cloud Bigtable, BigQuery, Cloud Spanner, Cloud Datastore, vector databases
- Ingest: Pub/Sub, Cloud Functions, App Engine, Kubernetes Engine, Kafka, microservices
- Schedule: Cloud Composer, Airflow
- Processing: Cloud Dataproc, Cloud Dataflow, Apache Spark, Apache Flink
- CI/CD: Bitbucket + Jenkins / GitLab; Infrastructure as Code: Terraform
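The streaming concepts this posting asks for (windowing, late arrival) can be illustrated with a toy, pure-Python version of fixed windows with an allowed-lateness cutoff. Real pipelines would use Beam, Flink, or Spark; the event times and watermark below are made up:

```python
from collections import defaultdict

def window_counts(events, width=60, watermark=0, allowed_lateness=30):
    """Group (event_time, value) pairs into fixed windows of `width` seconds.

    A window is treated as closed once the watermark passes its end plus
    `allowed_lateness`; events for closed windows are dropped, loosely
    mirroring Beam/Flink allowed-lateness semantics.
    """
    windows, dropped = defaultdict(list), []
    for ts, value in events:
        window_start = (ts // width) * width
        window_end = window_start + width
        if window_end + allowed_lateness < watermark:
            dropped.append((ts, value))  # window already closed: too late
        else:
            windows[window_start].append(value)
    return dict(windows), dropped

# Toy stream: the watermark has advanced to 130s, so the [0, 60) window is
# closed (60 + 30 < 130) while [60, 120) still accepts late data.
events = [(5, "a"), (62, "b"), (10, "c"), (61, "d")]
wins, dropped = window_counts(events, width=60, watermark=130, allowed_lateness=30)
```

Triggers, the third concept the listing names, would decide *when* each window's accumulated values are emitted (e.g., at watermark passage or after N elements); the sketch emits everything at once for simplicity.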

Posted 1 day ago

Apply

1.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


What you'll do
This position is at the forefront of Equifax's post-cloud transformation, focusing on developing and enhancing Java applications within the Google Cloud Platform (GCP) environment. The ideal candidate will combine strong Java development skills with cloud expertise to drive innovation and improve existing systems.

Key Responsibilities
- Design, develop, test, deploy, maintain, and improve software applications on GCP
- Enhance existing applications and contribute to new initiatives leveraging cloud-native technologies
- Implement best practices in serverless computing, microservices, and cloud architecture
- Collaborate with cross-functional teams to translate functional and technical requirements into detailed architecture and design
- Participate in code reviews and maintain high development and security standards
- Provide technical oversight and direction for Java and GCP implementations

What Experience You Need
- Bachelor's or Master's degree in Computer Science or equivalent experience
- 1+ years of IT experience with a strong focus on Java development
- Experience in modern Java development and cloud computing concepts
- Familiarity with agile methodologies and test-driven development (TDD)
- Strong understanding of software development best practices, including continuous integration and automated testing

What Could Set You Apart
- Experience with GCP or other cloud platforms (AWS, Azure)
- Active cloud certifications (e.g., Google Cloud Professional certifications)
- Experience with big data technologies (Spark, Kafka, Hadoop) and NoSQL databases
- Knowledge of containerization and orchestration tools (Docker, Kubernetes)
- Familiarity with the financial services industry
- Experience with open-source frameworks (Spring, Ruby, Apache Struts, etc.)
- Experience with Python

Posted 1 day ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Design, develop, and deploy AI/NLP solutions to solve diverse business challenges, particularly in areas like text classification, information extraction, summarization, and semantic search.

Responsibilities
- Conduct exploratory data analysis and feature engineering
- Contribute to development initiatives in the GenAI domain, focusing on cutting-edge technologies like Large Language Models, Retrieval-Augmented Generation, and autonomous agents
- Validate and monitor solution quality using real-world feedback data
- Work closely with ML engineers and DevOps teams to operationalize models (in cloud and on-prem environments)
- Deploy solutions hands-on to cloud-native AI platforms (AWS/Azure/GCP)
- Collaborate with clients and business stakeholders to scope and refine requirements, validate model behavior, and ensure successful deployment
- Explore and experiment with LLMs, prompt engineering, and retrieval-augmented generation (RAG) techniques for advanced use cases
- Contribute to building reusable components, best practices, and scalable frameworks for AI delivery
- Develop retrieval-augmented systems by combining LLMs with document retrieval, clustering, and search techniques

Qualifications
- 3-6 years of hands-on experience in data science, with a focus on NLP, deep learning, and machine learning applications
- Strong programming skills in Python; experience with relevant libraries such as scikit-learn, spaCy, NLTK, PyTorch, TensorFlow, or Hugging Face
- Proven experience in delivering NLP/LLM-based solutions
- Familiarity with cloud platforms (AWS, Azure, or GCP) and experience with deploying AI models to production
- Ability to take end-to-end ownership of solutions, from POC to deployment
- Prior experience in consulting or client-facing data science roles is a plus
- Exposure to document databases (e.g., MongoDB), graph databases, or vector databases (e.g., FAISS, Pinecone) is a bonus
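The retrieval step of the retrieval-augmented generation work described above can be sketched with a toy bag-of-words "embedding" and cosine similarity. A production system would use a real embedding model and a vector store such as FAISS or Pinecone; the documents and query below are invented:

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a simple bag-of-words vector
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "invoice processing with OCR pipelines",
    "semantic search over support tickets",
    "quarterly financial summary templates",
]
hits = retrieve("semantic search for tickets", docs)
# In a full RAG loop, hits would be stuffed into the LLM prompt as grounding context
```

The same shape (embed, rank, take top-k, prompt the LLM with the hits) carries over unchanged when the bag-of-words vectors are replaced by dense embeddings.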

Posted 1 day ago

Apply

3.0 years

0 Lacs

India

On-site


📢 We're Hiring: Senior API Developer
📍 Company: YugAlpha Tech Pvt Ltd
🕒 Employment Type: Full-Time
💼 Experience: 3+ Years

🌐 About YugAlpha Tech Pvt Ltd
YugAlpha Tech Pvt Ltd is a fast-growing IT company delivering end-to-end software solutions, technical training, and live project experiences. Our mission is to build scalable, high-performance applications powered by modern APIs, and we're looking for a skilled Senior API Developer to lead this effort.

💼 Role Overview: Senior API Developer
As a Senior API Developer, you will design, develop, and manage robust APIs that serve as the backbone of our digital products. You'll lead backend architecture decisions, mentor junior developers, and collaborate across teams to build scalable, secure systems.

🔧 Key Responsibilities
- Architect and implement high-performance, secure, and scalable APIs
- Develop and maintain RESTful and GraphQL services
- Design and optimize database schemas (SQL and NoSQL)
- Integrate third-party APIs and manage internal/external endpoints
- Lead code reviews, establish best practices, and ensure code quality
- Work with DevOps teams on CI/CD, version control, and deployment pipelines
- Mentor junior/trainee developers and contribute to technical documentation
- Troubleshoot performance issues and implement security measures

📌 Required Skills & Qualifications
- Bachelor's degree in Computer Science, IT, or a related field
- 3+ years of experience in backend/API development
- Strong proficiency in Node.js, Express, or Python (Django/Flask)
- Experience with REST, GraphQL, OAuth2, and JWT authentication
- Solid understanding of MongoDB, MySQL, or PostgreSQL
- Familiarity with cloud platforms (AWS, GCP, or Azure) and API gateways
- Knowledge of API documentation tools like Swagger and Postman
- Excellent problem-solving, debugging, and optimization skills
- Team leadership experience is a plus

🌟 What We Offer
- Ownership of meaningful backend/API systems
- Work on real-world client and internal projects
- Dynamic and growth-oriented work environment
- Competitive salary and performance bonuses
- Career advancement and leadership opportunities

📩 How to Apply
📧 Send your resume to: info@yugalpha.tech or hr@yugalpha.tech
📝 Subject Line: Application for Senior API Developer

🚀 Lead the next generation of backend systems. Join YugAlpha Tech Pvt Ltd as a Senior API Developer and build API-first solutions that scale.
🔗 Follow us on Instagram for hiring updates & tech content: https://www.instagram.com/yugalpha_tech
🔖 #SeniorAPIDeveloper #BackendEngineering #NodeJS #Python #RESTAPI #GraphQL #YugAlphaTech #WeAreHiring #TechCareers #JoinOurTeam
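The request/response cycle at the heart of the REST API work this role describes can be shown with a minimal stdlib WSGI app, exercised directly without starting a server. The `/health` route and payloads are hypothetical examples, not part of the posting:

```python
import json
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    # A single GET route; anything else falls through to 404
    if environ["PATH_INFO"] == "/health" and environ["REQUEST_METHOD"] == "GET":
        body = json.dumps({"status": "ok"}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [json.dumps({"error": "not found"}).encode()]

# Exercise the app in-process: build a fake WSGI environ and capture the status
environ = {}
setup_testing_defaults(environ)  # fills in REQUEST_METHOD="GET", etc.
environ["PATH_INFO"] = "/health"

status_seen = []
def start_response(status, headers):
    status_seen.append(status)

response = b"".join(app(environ, start_response))
```

A framework such as Express or Flask adds routing, middleware, and auth on top, but the underlying contract (method + path in, status + headers + body out) is the same.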

Posted 1 day ago

Apply

0 years

0 Lacs

India

Remote


Job Title: Senior DevOps Engineer at Certify (Remote)

Company Details
CertifyOS revolutionizes U.S. healthcare by providing API-first, UI-agnostic platforms for seamless provider network management. Automating verification and credentialing with extensive primary source integrations, CertifyOS ensures efficient, real-time data access and supports all provider networking needs. Located in New York City; Series A funded.

Job Roles & Responsibilities
- Design and implement scalable infrastructure on AWS and GCP to support CertifyOS's API-first platform.
- Automate cloud provisioning, monitoring, and scaling using tools like Terraform and Ansible.
- Develop CI/CD pipelines using Docker and Kubernetes to streamline deployments.
- Collaborate to optimize cloud services and reduce operational costs.
- Troubleshoot and resolve issues on the AWS, GCP, and Azure cloud platforms.
- Enhance platform reliability and performance using Google Kubernetes Engine (GKE) and Azure services.
- Contribute to infrastructure-as-code solutions in Python, Go, and Groovy.
- Support real-time data integration for healthcare provider network management.

Cultural Expectations
- Collaborate effectively across teams to ensure seamless integration and automation of provider data processes.
- Embrace agility and innovation in managing cloud infrastructure for real-time healthcare data solutions.
- Commit to excellence and accuracy in service delivery and provider verification.
- Engage openly in problem-solving, valuing diverse perspectives to overcome technical challenges.
- Lead with a continuous improvement mindset, proactively identifying and implementing process enhancements.

Hiring Process
1. Phone screening - Talent team
2. 45-minute intro call with the Hiring Manager
3. 90-minute technical screening - Hiring Manager (involves live coding in Codility)
4. Values interview

Posted 1 day ago

Apply

12.0 years

0 Lacs

Sahibzada Ajit Singh Nagar, Punjab, India

On-site

Linkedin logo

Job Title: Engineering Manager - Web Location: Mohali, India Company Overview: Leveraging over 12 years of experience, VT Netzwelt Pvt. Ltd. is a globally trusted technology partner recognized for its deep technical expertise, agile delivery model, and unwavering commitment to quality. We specialize in the design, development, and maintenance of high-performance web, mobile, and e-commerce applications for clients across retail, healthcare, education, and finance sectors. With 130+ full-time experts across India, Europe, the USA, and Australia, we deliver innovative digital solutions that solve complex technical challenges. Our clients include publicly listed enterprises, multinational corporations, and fast-scaling startups-all of whom value our engineering excellence, agile practices, and strong domain understanding. Position Summary We are looking for an Engineering Manager to lead our Web Team, focusing on delivering robust, scalable, and maintainable web solutions for global clients. This role is ideal for a technically hands-on leader who is passionate about engineering excellence, team development, and high-quality project delivery. Key Responsibilities ● Lead the day-to-day engineering operations of the Web Department, overseeing end-to-end web application development. ● Work closely with the Delivery team to ensure timely, quality-driven, and efficient execution of web projects. ● Mentor and guide engineers of varying experience levels; support career growth and skill development. ● Drive the adoption of best coding practices, peer code reviews, periodic technical evaluations, and modern DevOps principles. ● Strong awareness of AI-assisted development practices including Prompt Engineering, usage of modern AI-enabled IDEs such as Cursor, Windsurf, ClaudeCode Terminal, and familiarity with the broader AI tooling ecosystem to enhance developer productivity and code quality. 
● Proven expertise in developing scalable distributed systems leveraging diverse architectural paradigms, including Serverless, Microservices, and Hybrid architectures. ● Tech-forward mindset with a passion for continuous learning—champions experimentation, keeps pace with emerging trends, and leads the team’s adoption of modern frameworks, scalable architectures, and AI-powered development tools. ● Participate in planning and estimation exercises, ensuring effort alignment with technical complexity. ● Collaborate with Solution Architects to ensure optimal system design and architecture. ● Monitor key engineering metrics such as quality, velocity, and bug density to drive continuous improvement. Preferred Background & Experience ● 10+ years of web development experience with at least 3+ years in a team lead or engineering management role. ● Strong technical foundation in JavaScript, TypeScript, ReactJS, NodeJS, NestJS, or similar web technologies. ● Proven experience in architecting and delivering modern, scalable web applications. ● Familiarity with DevOps, CI/CD practices, and cloud platforms (AWS, GCP, Azure) is a plus. ● Experience managing or collaborating with cross-functional teams including Mobile, QA, and DevOps. ● Excellent communication and leadership skills with a collaborative mindset. Why Join Us ● Lead the Web Department in a company known for its engineering excellence and global impact. ● Work on diverse projects across eCommerce, Healthcare, Education, and Fintech. ● Be part of a collaborative, innovation-driven environment where your ideas matter. ● Benefit from a flat hierarchy, open communication culture, and continued learning opportunities. ● Competitive compensation and a chance to shape the technical direction of high-visibility projects.

Posted 1 day ago

Apply

3.0 - 5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Linkedin logo

Job Title: Data Engineer Job Summary Data Engineers will be responsible for the design, development, testing, maintenance, and support of data assets, including Azure Data Lake and data warehouse development, modeling, package creation, SQL script creation, stored procedure development, and integration services support, among other responsibilities. The candidate must have at least 3-5 years of hands-on Azure experience as a Data Engineer, be an expert in SQL, and have extensive expertise building data pipelines. The candidate will be accountable for meeting deliverable commitments, including schedule and quality compliance, and must be able to plan and schedule their own work activities and coordinate with cross-functional team members to meet project goals.
Basic Understanding Of
- Scheduling and workflow management; working experience in ADF, Informatica, Airflow, or similar
- Enterprise data modelling and semantic modelling; working experience in ERwin, ER/Studio, PowerDesigner, or similar
- Logical/physical modelling on big data sets or a modern data warehouse; working experience in ERwin, ER/Studio, PowerDesigner, or similar
- Agile process (Scrum cadences, roles, deliverables); basic understanding of Azure DevOps, JIRA, or similar
- Architecture and data modelling for a data lake on cloud; working experience in Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP)
- Build and release management; working experience in Azure DevOps, AWS CodeCommit, or similar
Strong In
- Writing code in a programming language; working experience in Python, PySpark, Scala, or similar
- Big data frameworks; working experience in Spark, Hadoop, or Hive (incl. derivatives like PySpark (preferred), Spark Scala, or Spark SQL), or similar
- Data warehouse concepts and development using SQL on single-node (SQL Server, Oracle, or similar) and parallel platforms (Azure SQL Data Warehouse or Snowflake)
- Code management; working experience in GitHub, Azure DevOps, or similar
- End-to-end architecture and ETL processes; working experience in an ETL tool or similar
- Reading data formats; working experience in JSON, XML, or similar
- Data integration processes (batch & real time) using tools; working experience in Informatica PowerCenter and/or Cloud, Microsoft SSIS, MuleSoft, DataStage, Sqoop, or similar
- Writing requirement, functional & technical documentation; working experience in integration design documents, architecture documentation, data testing plans, or similar
- SQL queries; working experience with SQL code, stored procedures, functions, views, or similar
- Databases; working experience in any database such as MS SQL, Oracle, or similar
- Analytical problem-solving skills; working experience resolving complex problems
- Communication (read & write in English), collaboration & presentation skills; working experience as a team player
Good To Have
- Stream processing; working experience in Databricks Streaming, Azure Stream Analytics, HDInsight, or Kinesis Data Analytics, or similar
- Analytical warehouse; working experience in SQL Data Warehouse, Amazon Athena, AWS Redshift, or BigQuery, or similar
- Real-time store; working experience in Azure Cosmos DB, Amazon DynamoDB, Cloud Bigtable, or similar
- Batch ingestion; working experience in Data Factory, Amazon Kinesis, Lambda, or Cloud Pub/Sub, or similar
- Storage; working experience in Azure Data Lake Storage Gen1/Gen2, Amazon S3, or Cloud Storage, or similar
- Batch data processing; working experience in Azure Databricks, HDInsight, Amazon EMR, or AWS Glue, or similar
- Orchestration; working experience in Data Factory, HDInsight, Data Pipeline, or Cloud Composer, or similar
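Whatever the tool (ADF, Informatica, SSIS, Spark), the role above revolves around the same extract-transform-load pattern. A minimal, self-contained sketch using only the Python standard library; sqlite3 stands in for the warehouse, and the table and column names are illustrative only:

```python
import sqlite3

# Minimal ETL sketch: extract raw records, transform (normalize types and
# casing), load into a warehouse table. sqlite3 stands in for the target
# platform; real pipelines add validation, auditing, and incremental loads.

raw_rows = [
    {"order_id": "A-1", "amount": "19.99", "country": "in"},
    {"order_id": "A-2", "amount": "5.00",  "country": "US"},
]

def transform(row: dict) -> tuple:
    # Cast the amount to a number and normalize the country code.
    return (row["order_id"], float(row["amount"]), row["country"].upper())

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (order_id TEXT PRIMARY KEY, amount REAL, country TEXT)"
)
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)", (transform(r) for r in raw_rows)
)

total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(f"loaded {len(raw_rows)} rows, total amount {total:.2f}")
```

The same three stages map directly onto the listing's tooling: extraction from sources (Sqoop, ADF copy activities), transformation in Spark or stored procedures, and loading into Synapse or Snowflake.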

Posted 1 day ago

Apply

5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Linkedin logo

Data Scientist - Retail & E-commerce Analytics with Personalization, Campaigns & GCP/BigQuery Expertise We are looking for a skilled Data Scientist with strong expertise in Retail & E-commerce Analytics , particularly in personalization , campaign optimization , and Generative AI (GenAI) , along with hands-on experience working with Google Cloud Platform (GCP) and BigQuery . The ideal candidate will use data science methodologies and advanced machine learning techniques to drive personalized customer experiences, optimize marketing campaigns, and create innovative solutions for the retail and e-commerce business. This role will also involve working with large-scale datasets on GCP and performing high-performance analytics using BigQuery . Responsibilities E-commerce Analytics & Personalization : Develop and implement machine learning models for personalized recommendations , product search optimization , and customer segmentation to improve the online shopping experience. Analyze customer behavior data to create tailored experiences that drive engagement, conversions, and customer lifetime value. Build recommendation systems using collaborative filtering , content-based filtering , and hybrid approaches. Use predictive modeling techniques to forecast customer behavior, sales trends, and optimize inventory management. Campaign Optimization Analyze and optimize digital marketing campaigns across various channels (email, social media, display ads, etc.) using statistical analysis and A/B testing methodologies. Build predictive models to measure campaign performance, improving targeting, content, and budget allocation. Utilize customer data to create hyper-targeted campaigns that increase customer acquisition, retention, and conversion rates. Evaluate customer interactions and campaign performance to provide insights and strategies for future optimization. 
Generative AI (GenAI) & Innovation Use Generative AI (GenAI) techniques to dynamically generate personalized content for marketing, such as product descriptions, email content, and banner designs. Leverage Generative AI to synthesize synthetic data, enhance existing datasets, and improve model performance. Work with teams to incorporate GenAI solutions into automated customer service chatbots, personalized product recommendations, and digital content creation. Big Data Analytics With GCP & BigQuery Leverage Google Cloud Platform (GCP) for scalable data processing, machine learning, and advanced analytics. Utilize BigQuery for large-scale data querying, processing, and building data pipelines, allowing efficient data handling and analytics at scale. Optimize data workflows on GCP using tools like Cloud Storage , Cloud Functions , Cloud Dataproc , and Dataflow to ensure data is clean, reliable, and accessible for analysis. Collaborate with engineering teams to maintain and optimize data infrastructure for real-time and batch data processing in GCP. Data Analysis & Insights Perform data analysis across customer behavior, sales, and marketing datasets to uncover insights that drive business decisions. Develop interactive reports and dashboards using Google Data Studio to visualize key performance metrics and findings. Provide actionable insights on key e-commerce KPIs such as conversion rate , average order value (AOV) , customer lifetime value (CLV) , and cart abandonment rate . Collaboration & Cross-Functional Engagement Work closely with marketing, product, and technical teams to ensure that data-driven insights are used to inform business strategies and optimize retail e-commerce operations. Communicate findings and technical concepts effectively to stakeholders, ensuring they are actionable and aligned with business goals. 
Key Technical Skills Machine Learning & Data Science : Proficiency in Python or R for data manipulation, machine learning model development (scikit-learn, XGBoost, LightGBM), and statistical analysis. Experience building recommendation systems and personalization algorithms (e.g., collaborative filtering, content-based filtering). Familiarity with Generative AI (GenAI) technologies, including transformer models (e.g., GPT), GANs , and BERT for content generation and data augmentation. Knowledge of A/B testing and multivariate testing for campaign analysis and optimization. Big Data & Cloud Analytics Hands-on experience with Google Cloud Platform (GCP) , specifically BigQuery for large-scale data analytics and querying. Familiarity with BigQuery ML for running machine learning models directly in BigQuery. Experience working with GCP tools like Cloud Dataproc , Cloud Functions , Cloud Storage , and Dataflow to build scalable and efficient data pipelines. Expertise in SQL for data querying, analysis, and optimization of data workflows in BigQuery . E-commerce & Retail Analytics Strong understanding of e-commerce metrics such as conversion rates , AOV , CLV , and cart abandonment . Experience with analytics tools like Google Analytics , Adobe Analytics , or similar platforms for web and marketing data analysis. Data Visualization & Reporting Proficiency in data visualization tools like Tableau , Power BI , or Google Data Studio to create clear, actionable insights for business teams. Experience developing dashboards and reports that monitor KPIs and e-commerce performance. Desired Qualifications Bachelor's or Master's degree in Computer Science , Data Science , Statistics , Engineering , or related fields. 5+ years of experience in data science , machine learning , and e-commerce analytics , with a strong focus on personalization , campaign optimization , and Generative AI . 
Hands-on experience working with GCP and BigQuery for data analytics, processing, and machine learning at scale. Proven experience in a client-facing role or collaborating cross-functionally with product, marketing, and technical teams to deliver data-driven solutions. Strong problem-solving abilities, with the ability to analyze large datasets and turn them into actionable insights for business growth.
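The collaborative filtering mentioned in the requirements can be illustrated at toy scale: score items for a user by the cosine similarity of item rating vectors. A standard-library-only sketch with fabricated ratings; production systems compute this with Spark MLlib, BigQuery ML, or dedicated recommender libraries:

```python
from math import sqrt

# Toy item-item collaborative filtering: items whose rating vectors point
# in similar directions (high cosine similarity) are recommended together.
# Users, items, and ratings are fabricated for illustration.

ratings = {  # user -> {item: rating}
    "u1": {"shoes": 5, "socks": 4, "hat": 1},
    "u2": {"shoes": 4, "socks": 5},
    "u3": {"hat": 5, "scarf": 4},
}

def item_vector(item: str) -> dict:
    """Ratings for one item, keyed by user."""
    return {u: r[item] for u, r in ratings.items() if item in r}

def cosine(a: dict, b: dict) -> float:
    shared = set(a) & set(b)
    num = sum(a[u] * b[u] for u in shared)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

sim = cosine(item_vector("shoes"), item_vector("socks"))
print(f"sim(shoes, socks) = {sim:.3f}")
```

Here shoes and socks are rated similarly by the same users, so their similarity is high, while shoes and scarf share no raters and score zero; content-based and hybrid approaches, also named in the posting, blend in item features to handle exactly that cold-start gap.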

Posted 1 day ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

About Medline: Medline is America's largest privately held national manufacturer and distributor of healthcare supplies and services. Today, Medline manufactures and distributes more than 550,000 medical products, encompassing medical-surgical items and one of the largest textile lines in the industry. With 17 manufacturing facilities and over 25 joint-venture manufacturing plants worldwide, along with 45 distribution centers in North America and 50 throughout the world, Medline posted $20.2 billion in revenue last year. Medline is ranked #27 in the Forbes 2019 list of America’s private companies. Job Description Medline is seeking a skilled DevOps Engineer with 5+ years of experience in DevOps and cloud technologies to lead our DevOps initiative. The ideal candidate will have extensive experience with Azure DevOps or Jenkins, SonarQube, JFrog Artifactory, PrismaCloud, Docker, and Kubernetes. The candidate should be able to work in a fast-paced environment and have a passion for driving continuous improvement and automation.
Job Responsibilities Design and implement continuous integration and continuous delivery/deployment (CI/CD) processes in the form of a pipeline for the delivery of software across the enterprise Should be able to work independently on DevOps initiatives Support the DevOps team in defining best practices for CI/CD across the full SDLC Support internal customers in the CI/CD lifecycle and work closely with multiple teams in an agile environment Skill And Experience 4+ years of IT experience working with software development, release, and build engineering teams 3+ years of DevOps Engineering experience DevOps experience must go beyond the use of release processes and into the actual design, development, implementation, and definition of best practices for CI/CD processes Experience integrating these stages into a CD pipeline 4+ years of Continuous Integration experience with industry-standard tooling CI experience with the following is a must: Azure DevOps/GitHub with GitHub Actions, Jenkins, GitLab, Bitbucket/Git, Nexus or Artifactory, SonarQube, WireMock or another mocking solution Experience utilizing APIs to integrate CI, testing, and deployments of systems across the SDLC Very strong understanding of modern IT infrastructure components and their integrations Any scripting language (e.g., Groovy, Bash, Grunt, PowerShell, Python, Ruby) Experience in managing and maintaining containerized applications using Docker and Kubernetes. Experience in managing and maintaining servers, including activities like patching, upgrades, etc. Experience building and deploying .NET applications, APIs, web services, UI code, etc. Good-to-have skills: Experience building DevOps analytics/dashboards. Good hands-on experience with Azure, GCP, or AWS. Experience with infrastructure as code (IaC) using Terraform and Ansible (optional).
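Whatever the tooling (Azure DevOps, Jenkins, GitHub Actions), the pipelines described above reduce to ordered stages that gate later work on earlier success. A minimal fail-fast stage-runner sketch in plain Python; the stage names and outcomes are illustrative:

```python
# Minimal fail-fast CI/CD stage runner: each stage is a callable returning
# True/False; the pipeline stops at the first failure, mirroring how
# Jenkins or Azure DevOps pipelines gate deploy stages on build, test,
# and quality-scan stages. Stage names and results are illustrative.

def run_pipeline(stages):
    completed = []
    for name, step in stages:
        ok = step()
        completed.append((name, ok))
        if not ok:
            break  # fail fast: remaining stages are skipped
    return completed

stages = [
    ("build", lambda: True),
    ("unit-tests", lambda: True),
    ("sonarqube-scan", lambda: False),  # simulated quality-gate failure
    ("deploy", lambda: True),           # never reached
]

for name, ok in run_pipeline(stages):
    print(f"{name}: {'ok' if ok else 'FAILED'}")
```

Real pipeline definitions express the same structure declaratively (stages, jobs, dependencies) and add the pieces this sketch omits: artifact handoff, parallel jobs, and manual approval gates.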

Posted 1 day ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Linkedin logo

Do you want to be part of an inclusive team that works to develop innovative therapies for patients? Every day, we are driven to develop and deliver innovative and effective new medicines to patients and physicians. If you want to be part of this exciting work, you belong at Astellas! Astellas Pharma Inc. is a pharmaceutical company conducting business in more than 70 countries around the world. We are committed to turning innovative science into medical solutions that bring value and hope to patients and their families. Keeping our focus on addressing unmet medical needs and conducting our business with ethics and integrity enables us to improve the health of people throughout the world. For more information on Astellas, please visit our website at www.astellas.com . This position is based in Bengaluru and will require some on-site work. Purpose And Scope As a Data and Analytics Developer, you will play a crucial role in transforming raw data into valuable insights. You’ll work closely with business stakeholders to understand their requirements and translate them into technical specifications. Your responsibilities will include developing and maintaining business intelligence (BI) and ETL solutions, creating visualizations, and ensuring data accuracy. Essential Job Responsibilities Collaborate with key stakeholders to gather requirements and translate them into technical specifications Contribute to the efficient administration of multi-server environments. Participate in smaller focused mission teams to deliver value driven solutions aligned to our global and bold move priority initiatives and beyond. Provide Technical Support to internal users troubleshooting complex issues and ensuring system uptime as soon as possible. Participate in the continuous delivery pipeline. Adhering to DevOps best practices for version control automation and deployment. Ensuring effective management of the FoundationX backlog. 
Leverage your knowledge of data engineering principles to integrate with existing data pipelines and explore new possibilities for data utilization. Stay up to date on the latest trends and technologies in data engineering and cloud platforms. Qualifications Required Bachelor's degree in computer science, information technology, or related field (or equivalent experience). 3-5+ years of proven experience as a Tester, Developer, or Data Analyst within a pharmaceutical or similar regulated environment. 3-5+ years of experience in BI development, ETL development, QlikSense, PowerBI, or equivalent technologies Experience working with data warehousing and data modelling Knowledge of database management systems (e.g., SQL Server, Oracle, MySQL). Understanding of ETL (Extract, Transform, Load) processes and data integration techniques. Ability to install/upgrade Qlik, Tableau, and/or PowerBI architecture, or any equivalent technology, within a cloud-based environment (AWS, Azure, or GCP, for example). QLIK/Tableau: Proficiency in designing, developing, and maintaining QLIK/Tableau applications. Experience with QLIK Sense and QLIKView is highly desirable. Experience working with NPrinting and Qlik Alerting Conducting unit testing and troubleshooting BI systems Data Analysis and Automation Skills: Proficient in identifying, standardizing, and automating critical reporting metrics Data Validation and Quality Assurance: Certified Developer in any of AWS/Azure/Databricks Preferred Experience working in the Pharma/Life Sciences industry or a similarly complex regulated industry. Experience in storytelling with data and visualisation best practices Knowledge/experience using Qlik or PowerBI SaaS solutions. Experience with other BI tools (Tableau, D3.js) is a plus.
Analytical Thinking: Demonstrated ability to lead ad hoc analyses, identify performance gaps, and foster a culture of continuous improvement. Agile Champion: Adherence to DevOps principles and a proven track record with CI/CD pipelines for continuous delivery. Working Environment At Astellas we recognize the importance of work/life balance, and we are proud to offer a hybrid working solution allowing time to connect with colleagues at the office with the flexibility to also work from home. We believe this will optimize the most productive work environment for all employees to succeed and deliver. Hybrid work from certain locations may be permitted in accordance with Astellas’ Responsible Flexibility Guidelines. Category FoundationX Astellas is committed to equality of opportunity in all aspects of employment. EOE including Disability/Protected Veterans

Posted 1 day ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

What will you do? Quality Assurance / Test Strategy & Execution Work closely with QA Team and implement QA Practices and strategies according to best practices and quality standards Understand and follow SDLC processes to meet quality goals at the product level Work as part of cross-functional scrum teams ("PODs"), developing applications using agile methodologies Provide guidance and mentor team members on test activities and bioinformatics testing approaches Collaborate with development teams to address bugs and production defects quickly Bioinformatics Testing Expertise Interpret requirements to develop verification & validation test plans for bioinformatics applications Design test cases for sequence analysis software including alignment, variant calling, and annotation Design, create, and maintain bioinformatics data simulators and automated testing frameworks Work with product management to understand functional specifications for genomic applications Execute complex test scenarios and investigate issues in bioinformatics pipelines Documentation & Process Improvement Create comprehensive documentation to record testing phases and provide test execution reports Modify/update test protocols based on requirement changes; perform impact analysis Stay current with new testing tools and strategies in both QA and bioinformatics domains Evaluate and improve testing methodologies for continuous improvement Work with cross-functional teams to ensure quality throughout the product development lifecycle Requirements What do you bring to the table? 
Master's degree in Bioinformatics, Computational Biology, or a related field 7+ years of experience in software QA, with at least 3 years in bioinformatics or genomics Minimum 2 years of working experience as a Bioinformatician or in genomic data analysis Demonstrated experience with Next Generation Sequencing tools (BWA, GATK, SAMtools, TopHat) Strong understanding of genomic data analysis, biostatistics, and data structures Experience with CI/CD pipelines and version control systems (Git) Knowledge of software engineering practices, SQA processes, and methodologies Professional Competencies Exceptional attention to detail and problem analysis abilities Strong communication skills (written and verbal) Ability to absorb complex information and integrate it into testing methodologies Focus on quality and continuous improvement Strategic vision to anticipate testing needs for emerging technologies Persuasiveness in advocating for quality standards and practices Experience in the medical device or life sciences industry Knowledge of handling/processing large genomic datasets Experience with cloud computing platforms (AWS, Azure, GCP) Experience with Docker and containerized applications Understanding of regulatory requirements for clinical software (HIPAA, FDA) Strong expertise in QA automation tools and frameworks (Selenium, RestAssured, Jenkins) Proficiency in Java and at least one scientific scripting language (Python/Java)
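The responsibilities above include building bioinformatics data simulators for pipeline testing. A toy sketch of that idea in standard-library Python: generate a random reference sequence and sample error-free reads whose true origins are known, the kind of fixture used to check that an aligner maps reads back where they came from. Lengths and counts are arbitrary; real simulators also model sequencing errors and quality scores:

```python
import random

# Toy NGS test-data simulator: build a random reference sequence and
# sample error-free reads from it. Because each read's true start
# position is recorded, a test can assert that an aligner (e.g. BWA)
# recovers it. Parameters here are arbitrary illustration values.

random.seed(42)  # deterministic fixtures keep test failures reproducible

def make_reference(length: int) -> str:
    return "".join(random.choice("ACGT") for _ in range(length))

def sample_reads(ref: str, n: int, read_len: int):
    """Yield (start_position, read) pairs with known ground truth."""
    for _ in range(n):
        start = random.randrange(len(ref) - read_len + 1)
        yield start, ref[start:start + read_len]

ref = make_reference(500)
reads = list(sample_reads(ref, n=20, read_len=50))
print(f"{len(reads)} reads of 50 bp from a {len(ref)} bp reference")
```

Verification test plans can then treat the recorded start positions as the oracle: feed the reads to the pipeline under test and compare reported alignments against the known ground truth.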

Posted 1 day ago

Apply

13.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Company Description Sandisk understands how people and businesses consume data and we relentlessly innovate to deliver solutions that enable today’s needs and tomorrow’s next big ideas. With a rich history of groundbreaking innovations in Flash and advanced memory technologies, our solutions have become the beating heart of the digital world we’re living in and that we have the power to shape. Sandisk meets people and businesses at the intersection of their aspirations and the moment, enabling them to keep moving and pushing possibility forward. We do this through the balance of our powerhouse manufacturing capabilities and our industry-leading portfolio of products that are recognized globally for innovation, performance and quality. Sandisk has two facilities recognized by the World Economic Forum as part of the Global Lighthouse Network for advanced 4IR innovations. These facilities were also recognized as Sustainability Lighthouses for breakthroughs in efficient operations. With our global reach, we ensure the global supply chain has access to the Flash memory it needs to keep our world moving forward. Job Description We are seeking a highly skilled and experienced Staff Engineer for Functional Modeling & Verification to join our innovative team in Bengaluru, India. As a Staff Engineer, you will play a crucial role in shaping our technical direction, leading complex projects, and mentoring junior engineers.
Lead architectural decisions and provide technical guidance to cross-functional teams Collaborate with product managers and other stakeholders to define technical requirements and solutions Conduct code reviews and ensure code quality across projects Mentor and guide junior engineers, fostering their professional growth Identify and resolve complex technical issues across multiple projects Stay current with emerging technologies and industry trends, recommending innovations to improve our tech stack Contribute to the development of engineering best practices and coding standards Participate in system design discussions and technical planning sessions Optimize existing systems for improved performance and scalability Hands-on experience in C++ and SystemC-based model development/test creation Prior experience with C-based tests/testbench development Python coding would be a plus Knowledge of NAND concepts will be an advantage Knowledge of memory and digital design concepts would be preferable (SRAM/DRAM/ROM/Flash circuits/logic) Participate in design/modeling reviews and provide technical guidance to junior engineers. Document all phases of modeling releases and development for future reference and maintenance. Stay updated with the latest technologies and trends in NAND Flash and modeling. Languages Expertise C, C++, Python, SystemC; SystemVerilog/UVM will be a plus Tool Expertise Visual Studio, Git, Bitbucket Hands-on contributions coding C++ and SystemC models and test creation Debug issues in a firmware environment Validate the developed model using an SV/UVM testbench Debug failures and root-cause them by interacting with other teams/groups, etc.
Qualifications Bachelor's or Master's degree in Computer Science or a related field BE/BTech/ME/MTech in Engineering with Computer Science, ECE or related field MSc/MCA in Computer Science or a related field 13+ years of software engineering experience, with a proven track record of leading complex technical projects Expert-level proficiency in one or more programming languages such as Java, Python, or C++ Strong experience with cloud platforms (e.g., AWS, Azure, GCP) and distributed systems In-depth knowledge of system design, architecture, and performance optimization Proficiency in version control systems, preferably Git Ability to work effectively in a fast-paced, agile environment Strong analytical and detail-oriented approach to software development Additional Information Sandisk thrives on the power and potential of diversity. As a global company, we believe the most effective way to embrace the diversity of our customers and communities is to mirror it from within. We believe the fusion of various perspectives results in the best outcomes for our employees, our company, our customers, and the world around us. We are committed to an inclusive environment where every individual can thrive through a sense of belonging, respect and contribution. Sandisk is committed to offering opportunities to applicants with disabilities and ensuring all candidates can successfully navigate our careers website and our hiring process. Please contact us at jobs.accommodations@sandisk.com to advise us of your accommodation request. In your email, please include a description of the specific accommodation you are requesting as well as the job title and requisition number of the position for which you are applying.

Posted 1 day ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

Design, develop, and deploy AI/NLP solutions to solve diverse business challenges—particularly in areas like text classification, information extraction, summarization, and semantic search Conduct exploratory data analysis and feature engineering Contribute to development initiatives in the GenAI domain, focusing on cutting-edge technologies like Large Language Models, Retrieval-Augmented Generation, and autonomous agents. Validate and monitor solution quality using real-world feedback data Work closely with ML engineers and DevOps teams to operationalize models (on cloud and on-prem environments) Hands-on experience deploying solutions to cloud-native AI platforms (AWS/Azure/GCP) Collaborate with clients and business stakeholders to scope and refine requirements, validate model behavior, and ensure successful deployment Explore and experiment with LLMs, prompt engineering, and retrieval-augmented generation (RAG) techniques for advanced use cases Contribute to building reusable components, best practices, and scalable frameworks for AI delivery Experience developing retrieval-augmented systems by combining LLMs with document retrieval, clustering, and search techniques. Qualifications 3–6 years of hands-on experience in data science, with a focus on NLP, deep learning, and machine learning applications Strong programming skills in Python; experience with relevant libraries such as scikit-learn, spaCy, NLTK, PyTorch, TensorFlow, or Hugging Face Proven experience in delivering NLP/LLM-based solutions Familiarity with cloud platforms (AWS, Azure, or GCP) and experience with deploying AI models to production Ability to handle end-to-end ownership of solutions, from POC to deployment Prior experience in consulting or client-facing data science roles is a plus Exposure to document databases (e.g., MongoDB), graph databases, or vector databases (e.g., FAISS, Pinecone) is a bonus
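The retrieval step of the RAG systems described above can be shown at toy scale: rank documents by similarity to the query and hand the best one to the LLM as context. A standard-library sketch using TF-IDF cosine similarity over a fabricated corpus; production systems would use embeddings and a vector store such as FAISS or Pinecone instead:

```python
import math
from collections import Counter

# Toy retrieval step of a RAG pipeline: score each document's TF-IDF
# vector against the query and return the best match, which would then
# be stuffed into the LLM prompt as grounding context. The corpus is
# fabricated; real systems use dense embeddings and a vector database.

docs = [
    "the refund policy allows returns within thirty days",
    "shipping takes five business days within india",
    "contact support for refund status and returns",
]

def tfidf(text: str, idf: dict) -> dict:
    tf = Counter(text.split())
    return {w: c * idf.get(w, 0.0) for w, c in tf.items()}

def cosine(a: dict, b: dict) -> float:
    num = sum(a[w] * b.get(w, 0.0) for w in a)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

# Smoothed inverse document frequency over the corpus vocabulary.
idf = {
    w: math.log(len(docs) / sum(1 for d in docs if w in d.split())) + 1.0
    for w in {w for d in docs for w in d.split()}
}

query_vec = tfidf("refund and returns", idf)
best = max(docs, key=lambda d: cosine(query_vec, tfidf(d, idf)))
print("retrieved:", best)
```

The generation half of RAG then prompts the model with the retrieved text ("Answer using only the context below: ..."), which is what keeps the LLM's answer grounded in the indexed documents.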

Posted 1 day ago

Apply

5.0 years

0 Lacs

India

On-site

Linkedin logo

Job Title: HashiCorp Developer (Vault Specialist) Role: Senior Consultant Experience: 5-8 years Location: Bengaluru, Karnataka, India (Hybrid) Job Summary We are seeking a highly skilled and motivated HashiCorp Developer with over 5 years of experience to join our dynamic Privileged Access Management team. The ideal candidate will possess deep expertise in the implementation, administration, and ongoing support of HashiCorp Vault in enterprise-grade environments. This role involves integration with both on-premises and cloud-native applications, driving automation, ensuring scalability, and delivering secure and efficient secrets management solutions. Responsibilities Vault Implementation & Administration: Architect, implement, and maintain HashiCorp Vault clusters in both development and production environments. Design and manage secrets lifecycle, access policies, token/lease management, and key rotation. Monitor and tune performance, availability, and security of Vault services. Integration Expertise: Seamlessly integrate HashiCorp Vault with various applications and platforms including: On-premise applications Cloud-native services (AWS, Azure, GCP) Languages and frameworks such as PowerShell, Python, Java, and RESTful APIs. Develop plugins and connectors where necessary to support unique application requirements. Automation and Infrastructure as Code (IaC): Drive operational efficiency and scalability using Terraform and AWS Lambda for secrets automation and lifecycle management. Automate Vault provisioning, configuration, and secret injection using CI/CD pipelines. Develop and maintain reusable Infrastructure as Code templates and modules. Scripting and Development: Design robust scripting solutions using PowerShell, Python, and other relevant scripting languages. Automate secrets rotation, onboarding, and offboarding processes. Build integrations and custom tooling for DevOps pipelines and secrets orchestration.
Operational Support: Actively participate in on-call rotations and provide weekend support during production deployments or critical issue resolution. Troubleshoot and resolve Vault access issues, integration failures, and performance bottlenecks. Document architecture, processes, and best practices. Required Skills & Qualifications Minimum 5 years of hands-on experience with HashiCorp Vault in an enterprise environment. Strong expertise in secure secrets management, policy enforcement, and access control. Proficiency in PowerShell, Python, Bash, or other scripting languages. Demonstrated experience with Terraform, AWS Lambda, and DevOps automation practices. Knowledge of system security and compliance principles (e.g., secrets rotation, audit logging). Experience with cloud platforms like AWS, Azure, or GCP. Understanding of DevOps CI/CD pipelines and integration with Vault for dynamic secrets management. Preferred Skills (Nice To Have) Experience with CyberArk or BeyondTrust PAM solutions. Familiarity with Kubernetes and Vault Agent Injector. Understanding of enterprise identity providers (LDAP, Azure AD, Okta) and their integration with Vault. Exposure to containerization and microservices security. Behavioral Competencies Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Self-motivated with a proactive approach to learning and implementation. Ability to work in a fast-paced, high-stakes environment with shifting priorities. Certifications (Preferred But Not Mandatory) HashiCorp Vault Associate Certification AWS Certified Solutions Architect or DevOps Engineer Simeio is an equal opportunity employer. If you require assistance with completing this application, interviewing, completing any pre-employment testing, or otherwise participating in the employee selection process, please direct your inquiries to any of the recruitment team at recruitment@simeio.com or +1 404-882-3700. Show more Show less
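The secrets-rotation responsibilities described above usually reduce to generating a high-entropy replacement value and tracking when each secret was last rotated. A minimal, tool-agnostic sketch in Python using only the standard library (in practice this logic would call the Vault API, e.g. through the hvac client; the KV path and functions below are purely illustrative assumptions):

```python
import secrets
import string
from datetime import datetime, timedelta, timezone

ALPHABET = string.ascii_letters + string.digits

def generate_secret(length=32):
    """Return a cryptographically strong random secret value."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def needs_rotation(last_rotated, max_age_days=90):
    """True if the secret is older than the rotation policy allows."""
    return datetime.now(timezone.utc) - last_rotated > timedelta(days=max_age_days)

# Example record as it might appear in a rotation inventory.
record = {
    "path": "kv/app/db-password",  # hypothetical Vault KV path
    "value": generate_secret(),
    "last_rotated": datetime.now(timezone.utc) - timedelta(days=120),
}

if needs_rotation(record["last_rotated"]):
    record["value"] = generate_secret()  # in practice: write back to Vault
    record["last_rotated"] = datetime.now(timezone.utc)
```

A real implementation would also revoke the old credential and update every consumer, which is why roles like this emphasize CI/CD integration rather than standalone scripts.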

Posted 1 day ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

About the company

We provide a full-stack IoT traceability solution using custom smart labels and ultra-low-power devices. We use cutting-edge technologies to enable end-to-end supply chain digitization. We are at the forefront of revolutionizing supply chain, warehouse, and inventory management solutions by providing real-time visibility into assets and shipments. Our dedicated team collaborates closely with the Product team to architect and uphold cutting-edge technologies that power our core platform, customer-facing APIs, and real-time event processing tailored specifically for the challenges of the supply chain industry. We tackle compelling technical hurdles, working with data from our fleet of IoT devices and sensors to provide real-time visibility. We foster a data-centric mindset, ensuring that exceptional ideas are welcomed and considered, regardless of the source.

Responsibilities
  • Collaborate with the Product team to design, develop, and maintain robust cloud-native, enterprise-scale web applications.
  • Design and build efficient, reusable web components and widgets using React, React Native, or similar technologies.
  • Collaborate with the API/microservices development team and leverage the APIs when building web applications.
  • Build intuitive, user-friendly interfaces.
  • Write secure code to prevent vulnerabilities.
  • Mentor and guide junior members as needed.

Requirements
  • Past experience developing world-class web UI applications and reusable components.
  • Strong knowledge of software development fundamentals, including a relevant background in computer science fundamentals, distributed systems, and agile development methodologies.
  • Ability to apply your knowledge and expertise to code and ship quality products in a timely manner.
  • Highly entrepreneurial: you thrive on taking ownership of your own impact and take the initiative to solve problems before they arise.
  • An excellent collaborator and communicator. You know that start-ups are a team sport: you listen to others, aren't afraid to speak your mind, and always try to ask the right questions.
  • Excited by the prospect of working in a distributed team and company, with teammates across the globe.

Qualifications
  • Bachelor's or Master's degree in Computer Science or equivalent.
  • 4+ years of experience developing scalable enterprise web front-end applications.
  • Proficient with JavaScript, React, HTML, and CSS.

Nice To Haves
  • Experience working with AWS or GCP.
  • Experience working with containerization technologies (Docker, Kubernetes).
  • Experience developing products in supply chain management and inventory management.
  • Experience developing mobile apps.

Posted 1 day ago

Apply

12.0 - 17.0 years

14 - 18 Lacs

Hyderabad

Work from Office

Naukri logo

About the Role

In this role as a Product Manager - Data Management, you will:
  • Develop and execute a comprehensive strategy for 3rd-party data platform adoption and expansion across the organization, with a focus on driving business outcomes and improving marketing effectiveness.
  • Collaborate with marketing teams to integrate 3rd-party data into their campaigns and workflows, and provide training and support to ensure effective use of the data.
  • Develop and showcase compelling use cases that demonstrate the value of 3rd-party data in improving marketing effectiveness, and measure the success of these use cases through metrics such as adoption rate, data quality, and marketing ROI.
  • Develop and maintain a roadmap for 3rd-party data platform adoption and expansion across the organization, with a focus on expanding use cases and applications for 3rd-party data and developing new data-driven products and services.
  • Monitor and measure the effectiveness of 3rd-party data in driving business outcomes, and adjust the adoption strategy accordingly.
  • Work with cross-functional teams to ensure data quality and governance, and develop and maintain relationships with 3rd-party data vendors to ensure seamless data integration and delivery.
  • Drive the development of new data-driven products and services that leverage 3rd-party data, and collaborate with stakeholders to prioritize and develop these products and services.

Shift Timings: 2 PM to 11 PM (IST). Work from office two days a week (mandatory).

About You

You're a fit for the role of Product Manager - Data Management if your background includes:
  • 12+ years of experience in data management, product management, or a related field.
  • Bachelor's or Master's degree in Computer Science, Data Science, Information Technology, or a related field.
  • Experience with data management tools such as data warehousing, ETL (Extract, Transform, Load), data governance, and data quality.
  • Understanding of the marketing domain and data platforms such as Treasure Data, Salesforce, Eloqua, 6Sense, Alteryx, Tableau, and Snowflake within a MarTech stack.
  • Experience with machine learning and AI frameworks (e.g., TensorFlow, PyTorch).
  • Expertise in SQL and Alteryx.
  • Experience with data integration tools and technologies such as APIs, data pipelines, and data virtualization.
  • Experience with data quality and validation tools and techniques such as data profiling, data cleansing, and data validation.
  • Strong understanding of data modeling concepts, data architecture, and data governance.
  • Excellent communication and collaboration skills.
  • Ability to drive adoption and expansion of D&B data across the organization.
  • Certifications in data management, data governance, or data science (nice to have).
  • Experience with cloud-based data platforms (e.g., AWS, GCP, Azure) (nice to have).
  • Knowledge of machine learning and AI concepts, including supervised and unsupervised learning, neural networks, and deep learning (nice to have).

Posted 1 day ago

Apply

14.0 - 19.0 years

14 - 18 Lacs

Bengaluru

Work from Office

Naukri logo

Position Summary

This position manages the activities of systems development, applications development, test strategies, and quality assurance functions for system enhancements and new products. Key responsibilities are to develop and manage people, provide technical leadership, lead project planning, facilitate communication, and offer product vision. The Manager coordinates project timelines with Project and Development Managers, determines and obtains resources, assigns work, monitors progress and results, and provides technical leadership. This Manager is a champion for product quality within the department and is accountable for an assessment of product readiness and commitments on product delivery schedules. This is a first-level management position.

Primary Responsibilities
  • Employee management, including but not limited to sourcing, interviewing, and hiring candidates for open positions, onboarding, establishing goals, assigning or delegating work, providing on-the-job training, giving guidance to staff, conducting performance evaluations, approving paid time off (PTO), developing performance improvement plans, and taking disciplinary action.
  • Recommends changes to policies and establishes procedures that affect the immediate organization(s).
  • Acts as an advisor to subordinates to meet schedules and/or resolve technical problems.
  • Ensures milestones are being met; monitors, tracks, and makes progress visible.
  • Develops and administers schedules and performance requirements; provides input into budgeting.
  • May meet with customers to communicate and review product features.
  • May communicate product roadmaps and project status to staff, senior management, and other product teams.
  • Evaluates and reviews new technologies for their applicability to product architecture and design.
  • Prioritizes product features, resulting in the correct delivery of needed functionality.
  • Coordinates with development service groups, resulting in greater communication and a higher probability of on-time delivery of products.
  • Responsible for upholding F5's Business Code of Ethics and for promptly reporting violations of the Code or other company policies.
  • Evaluates and solves software failures; improves existing functionality.
  • Works cross-functionally, integrating, testing, and debugging issues with existing system-wide software.
  • Collaborates with team members and technical leads.
  • Builds tools and infrastructure to improve F5's components and features.
  • Performs other related duties as assigned.

Knowledge, Skills and Abilities

Essential:
  • Excellent analytic troubleshooting and debugging skills.
  • Demonstrated excellence in written and verbal communications.
  • Programming proficiency.
  • Strong networking fundamentals and experience dealing with different layers of the networking stack.
  • Experience with network and web technologies such as TCP, UDP, IP, HTTP, L4-L7, and DNS.
  • SRE/DevOps on Linux and Kubernetes: excellent, hands-on knowledge of deploying workloads and managing their lifecycle on Kubernetes, with practical experience debugging issues.
  • On-call: experience managing everyday operations for production environments, including production alert management and using dashboards to debug issues.
  • Cloud infrastructure: prior experience deploying workloads and managing their lifecycle on any cloud provider (AWS/GCP/Azure).
  • Knowledge and expertise in software engineering methodologies.
  • Demonstrated ability to lead technical teams.
  • Good working experience in cloud-based product development.
  • Good knowledge of microservices architecture and API design and development best practices.
  • Working knowledge of development and deployment across multiple cloud providers such as Amazon Web Services, Microsoft Azure, Google Cloud Platform, VMware, and OpenStack.
  • Knowledge of or experience with Docker containers and orchestration platforms such as Kubernetes.
  • Able to collaborate and thrive in a dynamic environment.
  • Passion for learning new technologies, and a track record of doing so.
  • Track record of mentoring engineering staff.
  • Proven ability to deliver products of the highest quality, on time, and within budget.
  • Demonstrated ability in mentoring and developing direct reports.
  • Extensive experience with bug tracking and triage systems.
  • Excellent interpersonal and communication skills; demonstrated excellence in all written communications.
  • Duties may require being on call periodically or working outside normal working hours (evenings and weekends).
  • Duties may require travel via automobile or airplane, approximately 10% of the time.

Nice-to-have:
  • Experience programming in Linux networking and OS internals.
  • Agile-based software development methodologies such as Kanban and Scrum.
  • GitOps: experience with Helm charts/customizations and GitOps tools like ArgoCD/FluxCD.
  • Experience with disaster recovery and migration.

Qualifications

Typically requires a minimum of 14 years of related experience with a Bachelor's degree; or 12 years and a Master's degree; or a PhD with 10 years of experience; or equivalent work experience.

Environment
  • Empowered Work Culture: experience an environment that values autonomy, fostering a culture where creativity and ownership are encouraged.
  • Continuous Learning: benefit from the mentorship of experienced professionals with solid backgrounds across diverse domains, supporting your professional growth.
  • Team Cohesion: join a collaborative and supportive team where you'll feel at home from day one, contributing to a positive and inspiring workplace.

Posted 1 day ago

Apply

0 years

0 Lacs

Hyderābād

On-site

GlassDoor logo

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose – the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Lead Consultant - Automation Engineer.

In this role, you will be responsible for designing, developing, and implementing automated systems and processes to optimize production and operational efficiency. The role requires a strong technical background in automation technologies, excellent problem-solving skills, and the ability to work collaboratively with cross-functional teams.

Responsibilities
  • Design and develop automated systems for manufacturing, production, and other operational processes.
  • Develop, test, and maintain automation scripts and tools for infrastructure provisioning, configuration, and management.
  • Implement Infrastructure as Code (IaC) using tools like Terraform, Ansible, or similar technologies.
  • Assist in automating cloud infrastructure tasks across AWS, Azure, or GCP environments.
  • Collaborate with L3 engineers to implement automation solutions that enhance infrastructure scalability, reliability, and security.
  • Monitor and troubleshoot automated processes, identifying and resolving issues as they arise.
  • Participate in on-call rotations to provide support for automated infrastructure tasks and incidents.
  • Contribute to the continuous improvement of automation processes, identifying opportunities for further automation and optimization.
  • Assist in maintaining and updating documentation related to automated processes and infrastructure configurations.
  • Stay up to date with industry trends and best practices in infrastructure automation and related technologies.
  • Work closely with DevOps, IT operations, and development teams to understand automation requirements and implement solutions accordingly.
  • Provide feedback and suggestions to improve automation practices and infrastructure management strategies.
  • Communicate effectively with team members and stakeholders to ensure alignment on automation initiatives.

Qualifications we seek in you!

Minimum Qualifications / Skills
  • Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
  • Solid experience in automation engineering or a related role.
  • Solid experience in IT infrastructure, with automation.
  • Hands-on experience with automation tools like Terraform, Ansible, Puppet, or Chef.
  • Experience with cloud platforms such as AWS, Azure, or GCP.

Preferred Qualifications / Skills
  • Certifications in relevant technologies (e.g., AWS Certified Solutions Architect, Azure Administrator, Terraform Associate).
  • Experience with hybrid cloud environments.
  • Familiarity with configuration management and monitoring tools.
  • Proficiency in scripting languages such as Python, Bash, or PowerShell.
  • Knowledge of Infrastructure as Code (IaC) principles and practices.
  • Familiarity with CI/CD tools like Jenkins, GitLab CI, or similar.
  • Basic understanding of containerization tools like Docker and Kubernetes.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. For more information, visit www.genpact.com. Follow us on X, Facebook, LinkedIn, and YouTube.

Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Lead Consultant
Primary Location: India-Hyderabad
Schedule: Full-time
Education Level: Bachelor's / Graduation / Equivalent
Job Posting: Jun 17, 2025, 5:59:54 AM
Unposting Date: Ongoing
Master Skills List: Consulting
Job Category: Full Time

Posted 1 day ago

Apply

8.0 years

3 Lacs

Hyderābād

Remote

GlassDoor logo

Category: Engineering
Hire Type: Employee
Job ID: 10606
Remote Eligible: No
Date Posted: 15/06/2025

Job Summary

Synopsys' Generative AI Center of Excellence defines the technology strategy to advance applications of Generative AI across the company. The GenAI COE pioneers the core technologies (platforms, processes, data, and foundation models) that enable generative AI solutions, and partners with business groups and corporate functions to advance AI-focused roadmaps. We are seeking a highly skilled and experienced Staff AI Engineer to join our dynamic and innovative team. As a Sr. Staff AI Engineer, you will play a critical role in designing, developing, and deploying advanced AI and machine learning solutions. You will collaborate with cross-functional teams to drive AI initiatives and contribute to the development of cutting-edge technologies that enhance our products and services. This role demands a deep understanding of generative AI algorithms, strong programming skills, and the ability to lead and mentor junior engineers.

Key Responsibilities
  • AI/ML Solution Development: design, develop, and deploy AI and machine learning models and algorithms to solve complex business problems.
  • Technical Leadership: provide technical leadership and mentorship to junior engineers and data scientists, guiding them in best practices and advanced techniques.
  • Research & Innovation: stay up to date with the latest advancements in AI and machine learning technologies and apply them to improve existing systems or develop new solutions.
  • Collaboration: work closely with product managers, software engineers, and other stakeholders to define project requirements, create technical specifications, and ensure successful implementation of AI solutions.
  • Data Analysis & Preprocessing: perform data analysis, data preprocessing, and feature engineering to prepare datasets for machine learning models.
  • Model Training & Evaluation: train, validate, and fine-tune machine learning models, ensuring they meet performance and accuracy requirements.
  • Deployment & Monitoring: deploy AI models into production environments and monitor their performance, making adjustments as necessary to maintain optimal operation.
  • Documentation: document AI models, algorithms, and methodologies to ensure reproducibility and knowledge sharing within the team.
  • Compliance & Ethics: ensure AI solutions adhere to ethical guidelines, data privacy regulations, and industry standards.

Qualifications
  • Education: Bachelor's or Master's degree in Computer Science, Data Science, Electrical Engineering, or a related field. PhD is a plus.
  • Experience: minimum of 8 years of experience in AI and machine learning, with a proven track record of deploying AI solutions in a production environment. Experience designing and managing scalable platforms for AI/ML solutions.
  • Technical Skills: strong proficiency in programming languages such as Python or C++. Extensive experience with machine learning frameworks and libraries (e.g., TensorFlow, PyTorch, scikit-learn). Strong understanding of statistical analysis, data mining, and data visualization techniques. Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and containerization (e.g., Docker, Kubernetes). Familiarity with version control systems (e.g., Git) and software development methodologies (e.g., Agile, Scrum).
  • Soft Skills: excellent problem-solving and analytical skills; strong communication and interpersonal skills; ability to work independently and as part of a team; proven leadership and mentorship abilities.

Rewards and Benefits

We offer a comprehensive range of health, wellness, and financial benefits to cater to your needs. Our total rewards include both monetary and non-monetary offerings. Your recruiter will provide more details about the salary range and benefits during the hiring process.

Inclusion and Diversity

Synopsys considers all applicants for employment without regard to race, color, religion, sex, gender preference, national origin, age, disability, or status as a Covered Veteran in accordance with federal law. At Synopsys, we want talented people of every background to feel valued and supported to do their best work. Synopsys considers all applicants for employment without regard to race, color, religion, national origin, gender, sexual orientation, age, military veteran status, or disability.

Posted 1 day ago

Apply

Exploring GCP Jobs in India

The job market for Google Cloud Platform (GCP) professionals in India is growing rapidly as more companies move toward cloud-based solutions. GCP offers a wide range of services and tools that help businesses manage their infrastructure, data, and applications in the cloud. This has created high demand for skilled professionals who can work with GCP effectively.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Hyderabad
  4. Pune
  5. Chennai

Average Salary Range

The average salary range for GCP professionals in India varies based on experience and job role. Entry-level positions can expect a salary range of INR 5-8 lakhs per annum, while experienced professionals can earn anywhere from INR 12-25 lakhs per annum.

Career Path

Typically, a career in GCP progresses from a Junior Developer to a Senior Developer, then to a Tech Lead position. As professionals gain more experience and expertise in GCP, they can move into roles such as Cloud Architect, Cloud Consultant, or Cloud Engineer.

Related Skills

In addition to GCP, professionals in this field are often expected to have skills in:

  • Cloud computing concepts
  • Programming languages such as Python, Java, or Go
  • DevOps tools and practices
  • Networking and security concepts
  • Data analytics and machine learning
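Several of these skills come together in small, practical patterns. For example, client code that talks to cloud APIs is commonly written to retry transient failures with exponential backoff and jitter. A minimal, library-agnostic sketch in Python using only the standard library (the function name and parameters here are illustrative, not part of any GCP SDK):

```python
import random
import time

def call_with_backoff(func, max_retries=5, base_delay=1.0, max_delay=32.0):
    """Retry `func` on exception, doubling the wait after each failure.

    This mirrors the retry guidance common to cloud APIs: exponential
    growth of the delay, a cap, and random jitter so many clients
    don't all retry at the same instant.
    """
    for attempt in range(max_retries):
        try:
            return func()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(delay + random.uniform(0, 0.1))

# Example: a flaky operation that succeeds on its third attempt.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(call_with_backoff(flaky, base_delay=0.01))  # succeeds after retries
```

Production SDKs bundle this behavior (and distinguish retryable from non-retryable errors), but interviewers often ask candidates to reason through the bare pattern.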

Interview Questions

  • What is Google Cloud Platform and its key services? (basic)
  • Explain the difference between Google Cloud Storage and Google Cloud Bigtable. (medium)
  • How would you optimize costs in Google Cloud Platform? (medium)
  • Describe a project where you implemented CI/CD pipelines in GCP. (advanced)
  • How does Google Cloud Pub/Sub work and when would you use it? (medium)
  • What is Cloud Spanner and how is it different from other database services in GCP? (advanced)
  • Explain the concept of IAM and how it is implemented in GCP. (medium)
  • How would you securely transfer data between different regions in GCP? (advanced)
  • What is Google Kubernetes Engine (GKE) and how does it simplify container management? (medium)
  • Describe a scenario where you used Google Cloud Functions in a project. (advanced)
  • How do you monitor performance and troubleshoot issues in GCP? (medium)
  • What is Google Cloud SQL and when would you choose it over other database options? (medium)
  • Explain the concept of VPC (Virtual Private Cloud) in GCP. (basic)
  • How do you ensure data security and compliance in GCP? (medium)
  • Describe a project where you integrated Google Cloud AI services. (advanced)
  • What is the difference between Google Cloud CDN and Google Cloud Load Balancing? (medium)
  • How do you handle disaster recovery and backups in GCP? (medium)
  • Explain the concept of auto-scaling in GCP and when it is useful. (medium)
  • How would you set up a multi-region deployment in GCP for high availability? (advanced)
  • Describe a project where you used Google Cloud Dataflow for data processing. (advanced)
  • What are the best practices for optimizing performance in Google Cloud Platform? (medium)
  • How do you manage access control and permissions in GCP? (medium)
  • Explain the concept of serverless computing and how it is implemented in GCP. (medium)
  • What is the difference between Google Cloud Identity and Access Management (IAM) and AWS IAM? (advanced)
  • How do you ensure data encryption at rest and in transit in GCP? (medium)
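Several of the IAM questions above come down to understanding the shape of a GCP IAM policy: a list of bindings, each granting one role to a set of members. The toy helper below models that check in plain Python; the policy values are hypothetical, and real GCP evaluation (which this sketch ignores) also considers conditions and resource-hierarchy inheritance:

```python
# A GCP IAM policy is a list of bindings; each binding grants one role
# to a list of members ("user:", "group:", "serviceAccount:" prefixes).
policy = {
    "bindings": [
        {"role": "roles/storage.objectViewer",
         "members": ["user:analyst@example.com",
                     "group:data-team@example.com"]},
        {"role": "roles/storage.admin",
         "members": ["serviceAccount:etl@project.iam.gserviceaccount.com"]},
    ]
}

def has_role(policy, member, role):
    """Return True if `member` appears in a binding for `role`."""
    return any(
        b["role"] == role and member in b["members"]
        for b in policy["bindings"]
    )

print(has_role(policy, "user:analyst@example.com", "roles/storage.objectViewer"))  # True
print(has_role(policy, "user:analyst@example.com", "roles/storage.admin"))         # False
```

Being able to walk through this structure, and explain why roles are granted to members via bindings rather than attached to individual resources one user at a time, covers a large part of the basic and medium IAM questions.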

Closing Remark

As the demand for GCP professionals continues to rise in India, now is the perfect time to upskill and pursue a career in this field. By mastering GCP and related skills, you can unlock numerous opportunities and build a successful career in cloud computing. Prepare well, showcase your expertise confidently, and land your dream job in the thriving GCP job market in India.

cta

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies