Jobs
Interviews

43887 GCP Jobs - Page 37

Set up a Job Alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Kognitive Networks is revolutionizing network management by bringing Software-Defined Wide Area Networking (SDWAN) and Secure Access Service Edge (SASE) technologies to a broad array of industries and use cases. Targeting enterprises that have many locations and require seamless communication, Kognitive Networks provides a software-first, wireless-aware approach to optimizing connectivity across multiple networks, including LEO/GEO satellites and multi-carrier 4G/5G cellular networks. The integrated security features, connectivity controls, and unified system management enable enterprises to take advantage of the evolving wireless landscape to rapidly scale their business and network operations while reducing operating and technology expenses.

Job Description: Technical Manager / Senior Technical Lead (10-12 Years Experience)
Position: Technical Manager / Senior Technical Lead
Experience Level: 10-12 Years
Location: Chennai / Bangalore
Employment Type: Full-time

Role Overview
We are seeking a seasoned Technical Manager / Senior Technical Lead to lead and drive the development of cutting-edge software solutions. The ideal candidate will have expertise in Golang, Node.js, and TypeScript, coupled with strong experience in Kubernetes, Docker, and CI/CD pipeline management. This role involves architectural planning, tech stack selection, and team leadership, ensuring the successful delivery of high-quality software products.

Key Responsibilities

Technical Leadership:
- Oversee the end-to-end architecture design of applications and ensure alignment with business goals.
- Lead the tech stack selection process, considering scalability, performance, and cost-efficiency.
- Define and enforce best practices in coding, architecture, and deployment processes.
- Ensure seamless integration and deployment using CI/CD pipelines.

Team Management:
- Manage and mentor a team of developers, providing technical guidance and fostering a collaborative environment.
- Conduct code reviews and ensure adherence to coding standards.
- Plan and allocate tasks to team members, balancing workloads and ensuring timely delivery.
- Drive team upskilling initiatives, focusing on emerging technologies and tools.

Architecture and Development:
- Design and implement scalable and resilient microservices architectures using Golang and Node.js.
- Write clean, efficient, and maintainable code in TypeScript for both front-end and back-end applications.
- Collaborate with DevOps teams to optimize containerized deployments using Docker and Kubernetes.
- Ensure high availability and fault tolerance of applications through effective architectural planning.

Process Management:
- Build and maintain robust CI/CD pipelines to automate build, test, and deployment processes.
- Monitor application performance and address bottlenecks proactively.
- Lead the technical risk assessment for new projects and deployments.
- Work closely with stakeholders to gather requirements and translate them into technical solutions.

Required Skills

Technical Expertise:
- Programming Languages: Proficiency in Golang, Node.js, and TypeScript.
- DevOps Tools: Hands-on experience with Kubernetes, Docker, and CI/CD tools (e.g., Jenkins, GitLab CI/CD, Azure DevOps).
- Architecture: Strong knowledge of microservices architecture, RESTful APIs, and distributed systems.
- Cloud Platforms: Experience with AWS, Google Cloud Platform (GCP), or Azure.
- Databases: Familiarity with both SQL (e.g., PostgreSQL, MySQL) and NoSQL (e.g., MongoDB, Redis) databases.
- Working knowledge of the networking domain is an added advantage.

Management and Leadership:
- Proven experience in leading and managing development teams.
- Strong communication and collaboration skills to work with cross-functional teams.
- Ability to handle multiple projects and prioritize tasks effectively.

Soft Skills:
- Problem-solving mindset with the ability to make quick decisions under pressure.
- Strong attention to detail and focus on delivering high-quality solutions.
- Ability to mentor and coach team members, fostering growth and development.

Preferred Qualifications:
- Experience in implementing serverless architectures.
- Knowledge of observability tools like Prometheus, Grafana, or Datadog.
- Prior experience in scaling teams and systems in a fast-paced environment.

Education: Bachelor's/Master's degree in Computer Science, Engineering, or a related field.

Key Responsibilities Snapshot
- Tech Stack: Golang, Node.js, TypeScript; Docker, Kubernetes, CI/CD pipelines
- Leadership: Manage and mentor development teams; conduct architectural reviews and planning
- Strategic: Tech stack selection; long-term architectural planning; risk assessment and mitigation

Kindly share your updated resume at kalivaradhan.gopalakrishnan@kognitive.net or via WhatsApp at +91 86101 60445.

#techmanager #techlead #Golang #NodeJS #TypeScript #Kubernetes #Docker #CICD #techstackselection #architecturalplanning

Posted 3 days ago

Apply

3.0 years

0 Lacs

Greater Kolkata Area

Remote

Job Role: System & DevOps Engineer

Overview
Pitangent Group of Companies is an ISO:2015-certified, CMMI Level 3, award-winning software development company in Eastern India. It caters to areas ranging from AI/ML and web development to SaaS engineering.

Job Description
We are seeking a skilled System & DevOps Engineer to join our team and play a vital role in managing infrastructure, automating processes, and supporting development environments.

Key Responsibilities

Infrastructure Management:
- Design, deploy, and manage scalable, secure, and cost-effective cloud infrastructure on AWS, Azure, and GCP.
- Implement and maintain CI/CD pipelines using tools like Jenkins, GitLab CI, or CircleCI to automate deployment and testing processes.
- Utilize Docker for containerization and Kubernetes for orchestration to manage application deployments.

Automation and Configuration Management:
- Write and maintain Infrastructure as Code (IaC) using Terraform or equivalent tools.
- Automate infrastructure provisioning and management tasks using shell scripts and Python (see the brief sketch after this listing).
- Develop and manage configuration management systems using Ansible, Chef, or Puppet.

System Monitoring and Maintenance:
- Set up and manage monitoring, alerting, and logging systems to ensure high availability and performance.
- Perform regular system updates, patches, and performance tuning.
- Troubleshoot and resolve issues related to infrastructure, network, and application performance.

Development Environment Setup:
- Collaborate with development teams to set up and manage local and remote development environments.
- Ensure that development environments mirror production environments as closely as possible.
- Provide guidance and support to developers on best practices for environment management and automation.

Security and Compliance:
- Implement security best practices for cloud infrastructure, including access control, data encryption, and network security.
- Ensure compliance with industry standards and regulations, such as GDPR, HIPAA, or PCI-DSS.
- Conduct regular security audits and vulnerability assessments.

Collaboration and Mentoring:
- Work closely with development, QA, and product teams to ensure smooth and reliable operation of software and systems.
- Mentor junior engineers and provide technical leadership within the team.
- Document processes, policies, and procedures related to infrastructure and DevOps practices.

Eligibility Criteria

Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.

Experience:
- 3+ years of experience in Systems Engineering, DevOps, or a related role.
- Proven experience with cloud platforms such as AWS, Azure, and GCP.
- Extensive experience with CI/CD tools like Jenkins, GitLab CI, CircleCI, etc.
- Strong background in containerization and orchestration tools like Docker and Kubernetes.
- Hands-on experience with Infrastructure as Code (IaC) using Terraform or equivalent tools.
- Proficiency in scripting languages such as shell scripting and Python.
- Solid understanding of SQL and experience with relational databases.

Technical Skills:
- Strong knowledge of Linux/Unix systems and system administration.
- Experience in setting up and maintaining development environments.
- Familiarity with version control systems, particularly Git.
- Strong understanding of networking concepts, security, and best practices in cloud infrastructure.
- Experience with monitoring tools such as Prometheus, Grafana, or equivalent.

Soft Skills:
- Excellent problem-solving and troubleshooting skills.
- Strong communication skills, both written and verbal.
- Ability to work independently as well as collaboratively in a team environment.
- Strong organizational skills with the ability to manage multiple tasks and projects.

Employment Details
Job Types: Full-time, Permanent
Pay: ₹25,000.00 - ₹35,000.00 per month

Benefits & Perks
- Paid sick time
- Paid time off
- Provident Fund
- Work from home

Preferred Experience
- DevOps Project: 1 year (Preferred)
- Linux System Administration: 3 years (Required)

Skills: Linux, Terraform, shell scripting, Linux system administration, Kubernetes, infrastructure, CI/CD, networking, Ansible, Azure, Grafana, communication, Git, Prometheus, GitLab CI, AWS, Jenkins, Puppet, Python, CircleCI, security, Docker, cloud infrastructure, IaC, GCP, SQL, DevOps, Chef
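
For illustration only, the lightweight Python automation this role mentions might look like the following minimal sketch, which polls a list of service endpoints and reports failures. The URLs and timeout are hypothetical placeholders, not part of the posting.

```python
# Minimal health-check sketch: poll service endpoints and report failures.
# Endpoint URLs and the timeout value are illustrative placeholders.
import sys
import urllib.request
import urllib.error

ENDPOINTS = [
    "https://example.internal/healthz",   # hypothetical service endpoint
    "https://example.internal/api/ping",  # hypothetical service endpoint
]

def check(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with an HTTP 2xx status within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, TimeoutError):
        return False

def main() -> int:
    failures = [url for url in ENDPOINTS if not check(url)]
    for url in failures:
        print(f"UNHEALTHY: {url}", file=sys.stderr)
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())
```

A script like this would typically run on a schedule (cron or a CI job) and feed into the alerting systems the posting describes.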

Posted 3 days ago

Apply

8.0 - 13.0 years

25 - 40 Lacs

Pune

Hybrid

Job Title: AI Platform Architect
Location: Pune (Hybrid)
Experience Required: 10+ Years
Functional Area: IT Software – AI/ML, Architecture & Platform Engineering
Education: B.Tech/B.E. in CS/IT or related field; Master's degree preferred

Job Description
We are hiring an experienced AI Platform Architect to lead the design and evolution of the company's AI/ML platform. The role focuses on enabling AI delivery at scale across cloud and edge, driving efficiency, security, and agility throughout the ML lifecycle.

Roles & Responsibilities
- Define and maintain the reference architecture for an enterprise AI/ML platform, including data pipelines, model training, CI/CD, and observability.
- Build reusable platform services such as model registries, training environments, orchestration frameworks, and experiment tracking systems.
- Develop self-service interfaces (API/CLI/UI) for scalable, secure platform consumption by engineering teams.
- Collaborate across AI/ML teams, enterprise architects, and product engineering to ensure platform integration and adoption.
- Design deployment strategies for cloud, internal APIs, dashboards, and embedded edge systems.
- Implement governance and technical standards for lifecycle management, performance, and reusability.
- Serve as a technical leader, contributing to platform strategy, roadmap, and architecture best practices.

Key Skills Required
- 10+ years in Software / ML Platform / Infrastructure Engineering
- 3+ years in AI Platform or MLOps Architecture roles
- Strong hands-on experience with GCP cloud-native tools: Vertex AI, GKE, Cloud Run, Artifact Registry, Pub/Sub
- Experience with CI/CD for ML (e.g., GitHub Actions, Kubeflow, Terraform)
- Strong skills in Kubernetes, Docker, and secure container management
- Deep understanding of model lifecycle management (training, versioning, deployment, monitoring)
- Familiarity with tools like Airflow, feature stores, and dataset management
- Excellent architectural and system design capabilities
- Proven ability to work across multiple teams and influence technical decisions
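
As a small illustration of the GCP-native tooling this role centres on, a minimal sketch using the Vertex AI Python SDK (google-cloud-aiplatform) might look like the following. The project ID, region, artifact URI, and container image are hypothetical placeholders, and the exact calls should be checked against the current SDK documentation.

```python
# Minimal Vertex AI sketch: register a trained model and deploy it to an endpoint.
# Project, region, artifact URI, and container image are illustrative placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-central1")

model = aiplatform.Model.upload(
    display_name="demo-model",
    artifact_uri="gs://example-bucket/models/demo/",  # placeholder model artifacts
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"  # placeholder image
    ),
)

endpoint = model.deploy(machine_type="n1-standard-4")  # creates an endpoint and deploys the model
prediction = endpoint.predict(instances=[[0.1, 0.2, 0.3]])
print(prediction.predictions)
```

In a real platform, steps like these would usually sit behind the self-service interfaces and CI/CD pipelines the posting describes rather than being run by hand.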

Posted 3 days ago

Apply

3.0 - 8.0 years

10 - 14 Lacs

Pune, Bengaluru, Delhi / NCR

Hybrid

Cloud Security Engineer II – Zscaler
Shift: Rotational, 24x7
Location: Delhi NCR (Noida and Gurugram), Bangalore, Pune, Mumbai, Hyderabad, Trivandrum
Experience with Zscaler is a must, along with the ability to work independently in implementing and handling Zscaler.

Insight at a Glance
- 14,000+ engaged teammates globally, with operations in 25 countries across the globe
- Received 35+ industry and partner awards in the past year
- $9.2 billion in revenue
- #20 on Fortune's World's Best Workplaces™ list
- #14 on Forbes World's Best Employers in IT – 2023
- #23 on Forbes Best Employers for Women in IT – 2023
- $1.4M+ total charitable contributions in 2023 by Insight globally

About the role
As a Cloud Security Engineer II, you will be providing Security L1/L2/L3/Engineering support for Identity, Network, App Security, and Email Security based on Microsoft, Zscaler, Cisco, and other ISV tools, following the cloud security model that provides organizations with a range of security solutions and services. We will count on you to help organizations protect their networks, systems, and data from a variety of security threats, such as cyberattacks, data breaches, and unauthorized access.

Along the way, you will get to:
- Analyze logs and reports to identify and resolve connectivity, performance, and security issues.
- Assist in the deployment and configuration of Zscaler SIPA-related solutions.
- Be responsible for monitoring, management, and optimization of security services within the client's environment.
- Handle responsibilities including, but not limited to, continuous monitoring, email security, antivirus management, spam filtering, IAM/PAM, intrusion protection, security assessment, network security, SIEM/SOAR, and app security.

What we're looking for
- B.E/B.Tech/Graduate
- Experience in any cloud
- Minimum 2-3 years of hands-on experience with Zscaler (ZIA, ZPA, ZDX)
- Must possess a basic understanding of routing and switching.
- Should have a clear understanding of the architecture and traffic flow for ZIA (Zscaler Internet Access) and ZPA (Zscaler Private Access).
- Should be familiar with the SSL handshake and SSL Inspection, and have experience configuring SSL Inspection policies on ZIA.
- Experience in configuring locations with GRE (Generic Routing Encapsulation) and IPSec tunnels is essential.
- Experience in supporting SD-WAN integrated sites, including handling SSL inspection bypass configurations and resolving access issues for mobile and remote users.
- Proficiency in analyzing ZDX telemetry to identify end-user experience issues.
- Exposure to working in a ticket-driven environment (e.g., ServiceNow) with strong documentation and communication skills for internal and external stakeholders.
- Should have a strong understanding of PAC file modifications.
- Hands-on experience with SecureCRT, PuTTY, and Fiddler for log analysis.
- Should have exceptional problem-solving skills, identifying and isolating issues following established processes and obtaining approvals for resolutions.
- Should have a strong understanding of ZIA policies to enhance simplicity and reduce complexity.
- Should have knowledge and experience in troubleshooting ZPA settings, designing App Segments and Access policies to enhance security.
- Knowledge of writing detections based on network, host, OS, and other relevant logs.
- Experienced in configuration and logs from various advanced security tools.
- Basic troubleshooting skills on firewalls.

Posted 3 days ago

Apply

4.0 - 8.0 years

15 - 30 Lacs

Pune

Hybrid

About the Team
Our Platform Operations team builds and maintains the cloud infrastructure that powers Madhive's platform. We focus on reliability, scalability, security, and cost efficiency. The team embraces a culture of learning and knowledge-sharing, invests in automation and observability, and partners closely with engineering teams to make it easier and faster to deliver software at scale.

What You'll Do
- Improve the reliability and stability of Madhive's cloud services, operating primarily in a mix of AWS and Google Cloud, with more of the latter.
- Design, build, and maintain CI/CD tooling for Infrastructure as Code and internal services (GitHub Workflows, Cloud Build).
- Develop and support monitoring, alerting, and observability systems to ensure platform health.
- Automate deployment and management of cloud infrastructure using Terraform, Helm, and other IaC tooling.
- Administer, monitor, and optimize databases to ensure performance, reliability, and availability.
- Implement database backup, recovery, and scaling strategies to support large-scale distributed systems.
- Enforce cloud security best practices (IAM, permissions, policies).
- Identify opportunities to optimize cloud services and databases for efficiency and cost control.
- Collaborate with cross-functional guilds to establish operational standards and reduce risk.
- Stay current on emerging cloud and database technologies, evaluating them for potential adoption.

Who You Are
- Strong understanding of cloud infrastructure, networking, containerization, and distributed systems (GCP preferred, AWS/Azure a plus).
- Hands-on experience with Infrastructure as Code (Terraform or similar).
- Proficiency with Bash and command-line utilities; Golang experience required (PHP/Python/JavaScript nice to have).
- Experience with containerization and orchestration (Docker, Kubernetes).
- Solid background in database administration: provisioning, scaling, tuning, monitoring, backup/recovery, and troubleshooting.
- Familiarity with database performance optimization and observability tools.
- Experience with monitoring systems (Google Cloud Monitoring suite, Datadog, CloudWatch, etc.).
- Strong troubleshooting and problem-solving skills, with a systematic approach.
- Excellent written and verbal communication skills; able to document and share best practices.
- Comfortable in a fast-paced environment with a growth mindset and eagerness to learn.

Posted 3 days ago

Apply

4.0 years

0 Lacs

Kolkata, West Bengal, India

Remote

Hybrid Position

Level: AI Specialist / Software Engineering professional (cannot be a junior or fresher; needs to be a middle- to senior-level person).

Work Experience: Ideally, 4 to 5 years of working as a Data Scientist / Machine Learning and AI professional in a managerial position (end-to-end project responsibility). Slightly lower work experience can be considered based on the skill level of the candidate.

About The Job
Use AI/ML to work with data to predict process behaviors. Stay abreast of industry trends, emerging technologies, and best practices in data science, and provide recommendations for adopting innovative approaches within the product teams. In addition, champion a data-driven culture, promoting best practices, knowledge sharing, and collaborative problem-solving.

Abilities
- Knowledge of data analysis, Artificial Intelligence (AI), Machine Learning (ML), and preparation of test reports to show results of tests.
- Strong in communication with a collaborative attitude; not afraid to take responsibility and make decisions; open to new learning and able to adapt.
- Experience with end-to-end processes and used to presenting results to customers.

Technical Requirements
- Experience working with real-world messy data (time series, sensors, etc.)
- Familiarity with machine learning and statistical modelling
- Ability to interpret model results in a business context
- Knowledge of data preprocessing (feature engineering, outlier handling, etc.)

Soft Skill Requirements
- Analytical thinking – ability to connect results to business or process understanding
- Communication skills – comfortable explaining complex topics to stakeholders
- Structured problem solving – able to define and execute a structured way to reach results
- Autonomous working style – can drive a small project or parts of a project

Tool Knowledge
- Programming: Python (common core libraries: pandas, numpy, scikit-learn, matplotlib, mlflow, etc.); knowledge of best practices (PEP8, code structure, testing, etc.); code versioning (Git)
- Data Handling: SQL; understanding of data formats (CSV, JSON, Parquet); familiarity with time series data handling
- Infrastructure: Basic cloud technology knowledge (Azure (preferred), AWS, GCP); basic knowledge of MLOps workflows
- Good to have: Knowledge of Azure ML, AWS SageMaker; knowledge of MLOps best practices in any tool; containerization and deployment (Docker, Kubernetes)

Languages: English – Proficient/Fluent

Location: Hybrid (WFO + WFH), with availability to visit customer sites for meetings and work-related responsibilities as per the project requirements.
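
To make the kind of day-to-day work this posting describes concrete (cleaning messy sensor time series and fitting a baseline model), here is a minimal, hypothetical Python sketch using pandas and scikit-learn. The file path, column names, and thresholds are placeholders, not part of the posting.

```python
# Minimal sketch: clean a messy sensor time series and fit a baseline model.
# "sensor_readings.csv", column names, and clipping thresholds are illustrative placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("sensor_readings.csv", parse_dates=["timestamp"])
df = df.sort_values("timestamp").set_index("timestamp")

# Basic cleaning: clip obvious outliers and fill short gaps in the signal.
df["temperature"] = df["temperature"].clip(lower=-40, upper=120)
df = df.interpolate(limit=5)

# Simple feature engineering on the time series.
df["temp_lag_1"] = df["temperature"].shift(1)
df["temp_roll_mean"] = df["temperature"].rolling("1h").mean()
df = df.dropna()

X = df[["temp_lag_1", "temp_roll_mean", "pressure"]]
y = df["temperature"]
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False, test_size=0.2)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```

Note that the split keeps chronological order (shuffle=False), which is the usual choice for time series so the model is evaluated on data that comes after its training window.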

Posted 3 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Project Role: Software Development Engineer
Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work.
Must-have skills: SAP Basis Administration
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary:
As a Software Development Engineer, you will engage in a dynamic work environment where you will analyze, design, code, and test various components of application code across multiple clients. Your day will involve collaborating with team members to ensure the successful implementation of enhancements and maintenance tasks, while also focusing on the development of new features to meet client needs. You will be responsible for troubleshooting issues and providing solutions, ensuring that the applications function optimally and meet the required standards of quality and performance. Your role will also include documenting your work and participating in team discussions to share insights and best practices, contributing to a culture of continuous improvement and innovation.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of processes and procedures to enhance team knowledge.
- Engage in code reviews to ensure adherence to best practices and quality standards.

Professional & Technical Skills:
- Must-have skills: Proficiency in SAP Basis Administration.
- Installation and upgrade of SAP NetWeaver / non-NetWeaver products (ABAP/Java/Solman/BO/DS)
- System maintenance activities and troubleshooting
- Client administration: local client copy, remote client copy, and client export/import
- Deep understanding of SAP system architecture
- OS (Linux) file system management
- Knowledge of HA/DR concepts
- Kernel upgrade, add-on installation/upgrade
- Certificate installation/update in NetWeaver and non-NetWeaver products
- Homogeneous system copy
- ABAP/Java system export via SWPM
- SAP HANA and SYBASE ASE administration
- BOBJ/BODS/Web Dispatcher/CPI-DS/Cloud Connector and OpenText administration
- Experience with cloud-hosted applications (Azure, AWS, GCP)

Additional Information:
- The candidate should have a minimum of 3 years of experience in SAP Basis Administration.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.

Posted 3 days ago

Apply

12.0 - 18.0 years

18 - 23 Lacs

Pune

Remote

Company: Trinamix Inc.
Position: GenAI Lead
Experience: 12+ Years
Location: Remote
Employment Type: Full-Time

About Us
Trinamix Inc. is a leading global Oracle implementation and technology partner, recognized for driving digital transformation through innovative solutions in Supply Chain, Finance, Manufacturing, and emerging technologies. As we expand our portfolio, we are building a strong focus on Generative AI (GenAI) to create next-gen solutions for our clients worldwide.

Role Overview
We are seeking a highly experienced GenAI Lead to spearhead the design, development, and delivery of Generative AI solutions. This role requires a mix of strong technical expertise, leadership ability, and business acumen. The ideal candidate will lead a team of data scientists and engineers, collaborate with stakeholders to identify impactful use cases, and drive the successful deployment of AI models into production.

Key Responsibilities
- Lead the end-to-end design and development of Generative AI models, applications, and frameworks.
- Collaborate with cross-functional teams and stakeholders to identify business use cases and define AI strategies.
- Manage and mentor a high-performing team of AI/ML engineers and data scientists.
- Oversee deployment of AI models in production and monitor performance, scalability, and security.
- Drive innovation and research in cutting-edge AI/ML technologies to enhance Trinamix's AI offerings.
- Define and implement best practices for AI development, testing, and deployment.
- Partner with leadership and delivery teams to ensure AI solutions align with business objectives.
- Act as a thought leader for GenAI adoption, internally and externally, through innovation showcases and client interactions.

Skills & Qualifications
- 12+ years of overall experience with strong expertise in AI/ML, deep learning, and Generative AI.
- Hands-on experience with AI/ML frameworks such as TensorFlow, PyTorch, and Keras.
- Strong programming skills in Python (R/Java is a plus).
- Knowledge of data processing/analytics tools like Pandas, NumPy, and SQL.
- Proven expertise in cloud platforms (AWS, Azure, Google Cloud) for AI/ML model deployment.
- Demonstrated ability to lead AI delivery teams and manage complex projects.
- Excellent problem-solving, critical thinking, and stakeholder management skills.
- Strong communication and interpersonal skills to work with both technical and business stakeholders.

Why Join Trinamix?
- Lead AI-driven innovation in a global consulting environment.
- Collaborate with world-class clients and industry leaders.
- Opportunity to shape next-gen AI products and solutions.
- Be part of a supportive, forward-thinking, and innovative workplace culture.

Posted 3 days ago

Apply

7.0 - 11.0 years

30 - 45 Lacs

Bengaluru

Hybrid

About the Role
We're looking for a Staff Backend Engineer (7+ years) to architect and lead backend systems that power our core AI data platforms. You'll own large-scale distributed systems, drive backend design strategy, and work across product, infrastructure, and AI teams to deliver resilient, high-performance services. This is a high-impact IC role for engineers who enjoy solving complex systems problems, mentoring teams, and influencing architecture across multiple domains.

What You'll Do
- Architect and scale backend services that support large-scale data ingestion, transformation, workflow orchestration, and secure delivery pipelines.
- Lead system design and technical roadmapping across projects, ensuring platforms are resilient, extensible, and aligned with long-term goals.
- Mentor senior and mid-level engineers, leading by example in code quality, documentation, and systems thinking.
- Own backend performance and reliability, introducing metrics-driven approaches to scale systems efficiently and reduce operational overhead.
- Collaborate across disciplines (AI researchers, frontend, data, DevOps, and product teams) to design APIs and services with clear boundaries and SLAs.
- Champion platform evolution, identifying and driving initiatives around observability, developer tooling, scalability, and security.
- Influence technical direction across engineering by leading design reviews, promoting best practices, and contributing to internal engineering standards.

Minimum Qualifications
- 7+ years of professional experience in backend software engineering, with at least 1+ years operating at a staff level or equivalent.
- Proven experience designing and scaling backend systems in production, with a strong grasp of distributed systems, microservices, and service-oriented architecture.
- Advanced proficiency in one or more backend languages among Go, Java, and Python.
- Good understanding of cloud-native architectures (AWS, GCP, or Azure), containerization (Docker), and orchestration (Kubernetes).
- Deep knowledge of data modeling, API design (REST/GraphQL/RPC), caching, SDKs, developer tooling, and asynchronous systems (queues, event buses).
- Strong understanding of servers, database metrics, and observability.
- Experience implementing secure, observable, and testable systems.
- Familiarity with CI/CD pipelines and deployment tooling (GitHub workflows, CircleCI, etc.).
- Excellent communication and collaboration skills; able to drive alignment across teams.

Preferred Qualifications
- Experience working on data-intensive SaaS platforms, scalable backend services, or high-throughput data processing systems.
- Familiarity with event-driven and domain-driven design.
- Deep understanding of data structures, algorithms, system design, and architectural patterns.
- Background in leading technical initiatives across multiple teams and contributing to org-level architecture.
- Strong bias for action and ownership; comfortable making architectural decisions with long-term impact.
- Experience in mentoring up to SDE 3 engineers and participating in technical hiring loops.

What You'll Love / Benefits
- Opportunity to build backend systems that are foundational to the future of AI/GenAI.
- High-agency environment where your technical leadership directly shapes product outcomes.
- Hybrid flexibility with a culture of deep ownership and collaboration.
- Work accessories and a great work location.
- Competitive compensation, medical coverage (self & family), and surprise performance perks.
- Fun team culture with regular hackathons, offsites, and creative engineering challenges.

Posted 3 days ago

Apply

2.0 - 7.0 years

5 - 8 Lacs

Pune, Chennai, Mumbai (all areas)

Hybrid

Looking for a Snowflake Developer to design and implement data solutions using the Snowflake cloud data platform.
Location: PAN India
Experience: 2+ years
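
As a brief illustration of the kind of work involved, a minimal Python sketch using the snowflake-connector-python package might look like the following. The credentials, warehouse, database, table names, and query are hypothetical placeholders.

```python
# Minimal sketch: connect to Snowflake and build a simple aggregate table.
# All credentials, object names, and the SQL itself are illustrative placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # hypothetical account identifier
    user="my_user",
    password="my_password",
    warehouse="ANALYTICS_WH",
    database="SALES_DB",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Example transformation: aggregate a raw orders table into a daily summary.
    cur.execute("""
        CREATE OR REPLACE TABLE DAILY_ORDERS AS
        SELECT ORDER_DATE, COUNT(*) AS ORDER_COUNT, SUM(AMOUNT) AS TOTAL_AMOUNT
        FROM RAW_ORDERS
        GROUP BY ORDER_DATE
    """)
    rows = cur.execute(
        "SELECT * FROM DAILY_ORDERS ORDER BY ORDER_DATE LIMIT 5"
    ).fetchall()
    for row in rows:
        print(row)
finally:
    conn.close()
```

In practice, credentials would come from a secrets manager rather than being hard-coded, and transformations like this are often orchestrated by a scheduler or a tool such as dbt.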

Posted 3 days ago

Apply

3.0 - 8.0 years

5 - 15 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Role: GCP
Experience: 3+ years
Location: PAN India

Technical and Professional Requirements:
- Technology -> Cloud Platform -> GCP Database -> Google BigQuery

Preferred Skills:
- Technology -> Cloud Platform -> GCP Data Analytics -> Looker
- Technology -> Cloud Platform -> GCP Database -> Google BigQuery
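
For context, a minimal sketch of querying Google BigQuery from Python with the google-cloud-bigquery client is shown below; the project ID, dataset, and table names are hypothetical placeholders.

```python
# Minimal sketch: run a query against BigQuery and print the results.
# Project, dataset, and table names are illustrative placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical project ID

sql = """
    SELECT region, COUNT(*) AS order_count
    FROM `my-gcp-project.sales_dataset.orders`
    GROUP BY region
    ORDER BY order_count DESC
    LIMIT 10
"""

query_job = client.query(sql)      # starts the query job
for row in query_job.result():     # waits for completion and iterates over result rows
    print(row["region"], row["order_count"])
```

Results from queries like this are typically surfaced through a BI layer such as Looker, which the posting lists as a preferred skill.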

Posted 3 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Hi, this is Prashant, a Senior Recruiter from Triunity Software Inc.
Please follow me on LinkedIn: https://www.linkedin.com/in/usaprashantrathore/
Please send me an email with the job number (1 to 7) in the subject line at prashant@triunitysoft.com

We're Hiring Java Developers – Multiple Roles (1 to 7)
Location: Mumbai / Pune / Bengaluru :: Onsite
Joining Period: Immediate joiners or within 30 days

Open Roles
We are hiring across multiple positions and experience levels:
1) Java Developer – Core Java, J2EE, Struts, Shell scripting (3–5 yrs)
2) Java Developer – Java, Spring Boot, JBoss, AWS, Linux (4–8 yrs)
3) Java Developer – Java, Python, Oracle, Databricks, AWS (6–10 yrs)
4) Java Developer – Java, Cloud Native (AWS, Azure, OCI, GCP) (10–12 yrs)
5) Java Developer – Full Stack (Java, Spring Boot, AWS) (4–8 yrs)
6) Java Developer – Full Stack (Java, Spring Boot, Python, Oracle, Databricks, AWS) (4–8 yrs)
7) Java Developer – Java & React.js (4–8 yrs)

Thanks & Regards,
Prashant Rathore

Posted 3 days ago

Apply

1.0 years

0 Lacs

Ramban, Jammu & Kashmir, India

On-site

Description

Associate, QA (Mumbai Location)

Syneos Health® is a leading fully integrated biopharmaceutical solutions organization built to accelerate customer success. We translate unique clinical, medical affairs, and commercial insights into outcomes to address modern market realities. Our Clinical Development model brings the customer and the patient to the center of everything that we do. We are continuously looking for ways to simplify and streamline our work to not only make Syneos Health easier to work with, but to make us easier to work for. Whether you join us in a Functional Service Provider partnership or a Full-Service environment, you'll collaborate with passionate problem solvers, innovating as a team to help our customers achieve their goals. We are agile and driven to accelerate the delivery of therapies, because we are passionate to change lives. Discover what our 29,000 employees, across 110 countries, already know: WORK HERE MATTERS EVERYWHERE.

Why Syneos Health
We are passionate about developing our people through career development and progression; supportive and engaged line management; technical and therapeutic area training; peer recognition; and a total rewards program. We are committed to our Total Self culture – where you can authentically be yourself. Our Total Self culture is what unites us globally, and we are dedicated to taking care of our people. We are continuously building the company we all want to work for and our customers want to work with. Why? Because when we bring together diversity of thoughts, backgrounds, cultures, and perspectives, we're able to create a place where everyone feels like they belong.

Job Responsibilities

General Profile
- Experience: 1 to 2 years in the Life Sciences industry
- Will be performing Quality Control (QC) of medical event data
- Excellent attention to detail and analytical skills
- Office-based role with mandatory visits to the client office in Mumbai
- Technical Skills: Proficient in Microsoft Excel
- Regulatory Knowledge: Familiar with ICH-GCP guidelines
- Strong understanding of medical terminologies

- Assists in the tracking of planned and scheduled audits in the enterprise quality management system.
- Updates audit schedules and calendars as requested by management.
- Assists in gathering documentation required by auditors for audits and inspections (e.g. training records, organizational charts).
- Assists auditors in obtaining follow-up information during customer audits and regulatory inspections.
- Enters audit-related data into the enterprise quality management system.
- Maintains, files, and archives relevant Quality Assurance (QA) documentation.

Supervision
Normally receives detailed instructions on day-to-day work and detailed instructions on new assignments.

Get to know Syneos Health
Over the past 5 years, we have worked with 94% of all Novel FDA Approved Drugs, 95% of EMA Authorized Products, and over 200 Studies across 73,000 Sites and 675,000+ Trial patients. No matter what your role is, you'll take the initiative and challenge the status quo with us in a highly competitive and ever-changing environment. Learn more about Syneos Health: http://www.syneoshealth.com

Additional Information
Tasks, duties, and responsibilities as listed in this job description are not exhaustive. The Company, at its sole discretion and with no prior notice, may assign other tasks, duties, and job responsibilities. Equivalent experience, skills, and/or education will also be considered, so qualifications of incumbents may differ from those listed in the Job Description. The Company, at its sole discretion, will determine what constitutes as equivalent to the qualifications described above. Further, nothing contained herein should be construed to create an employment contract. Occasionally, required skills/experiences for jobs are expressed in brief terms. Any language contained herein is intended to fully comply with all obligations imposed by the legislation of each country in which it operates, including the implementation of the EU Equality Directive, in relation to the recruitment and employment of its employees. The Company is committed to compliance with the Americans with Disabilities Act, including the provision of reasonable accommodations, when appropriate, to assist employees or applicants to perform the essential functions of the job.

Summary
Supports the auditing team in gathering documentation for audits and inspections and tracking the schedule of audits and inspections in the enterprise quality management system.

Posted 3 days ago

Apply

6.0 - 8.0 years

25 - 27 Lacs

Hyderabad

Work from Office

We seek a Senior Gen AI Engineer with strong ML fundamentals and data engineering expertise to lead scalable AI/LLM solutions. This role focuses on integrating AI models into production, optimizing machine learning workflows, and creating scalable AI-driven systems. You will design, fine-tune, and deploy models (e.g., LLMs, RAG architectures) while ensuring robust data pipelines and MLOps practices.

Key Responsibilities

Agentic AI & Workflow Design:
- Lead design and implementation of Agentic AI systems and multi-step AI workflows.
- Build AI orchestration systems using frameworks like LangGraph.
- Utilize Agents, Tools, and Chains for complex task automation.
- Implement Agent-to-Agent (A2A) communication and Model Context Protocol (MCP) for inter-model interactions.

Production MLOps & Deployment:
- Develop, train, and deploy ML models optimized for production.
- Implement CI/CD pipelines (GitHub), automated testing, and robust observability (monitoring, logging, tracing) for Gen AI solutions.
- Containerize models (Docker) and deploy on cloud (AWS/Azure/GCP) using Kubernetes.
- Implement robust AI/LLM security measures and adhere to Responsible AI principles.

AI Model Integration:
- Integrate LLMs and models from HuggingFace.
- Apply deep learning concepts with PyTorch or TensorFlow.

Data & Prompt Engineering:
- Build scalable data pipelines for unstructured/text data.
- Design and implement embedding/chunking strategies for scalable data processing (see the sketch after this listing).
- Optimize storage/retrieval for embeddings (e.g., Pinecone, Weaviate).
- Utilize prompt engineering techniques to fine-tune AI model performance.

Solution Development:
- Develop GenAI-driven Text-to-SQL solutions.

Required Skills
- Programming: Python.
- Foundation Model APIs: Azure OpenAI, OpenAI, Gemini, Anthropic, or AWS Bedrock.
- Agentic AI & LLM Frameworks: LangChain, LangGraph, A2A, MCP, Chains, Tools, Agents. Ability to design multi-agent systems, autonomous reasoning pipelines, and tool-calling capabilities for AI agents.
- MLOps/LLMOps: Docker, Kubernetes (K8s), CI/CD, automated testing, monitoring, observability, model registries, data versioning.
- Cloud Platforms: AWS/Azure/GCP.
- Vector Databases: Pinecone, Weaviate, or similar leading platforms.
- Prompt Engineering.
- Security & Ethics: AI/LLM solution security, Responsible AI principles.
- Version Control: GitHub.
- Databases: SQL/NoSQL.
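
To make the embedding/chunking and retrieval ideas above concrete, here is a minimal, framework-free Python sketch of the retrieval step used in RAG pipelines. The embed() function is a stand-in for a real embedding model (OpenAI, Vertex AI, HuggingFace, etc.), and the documents are placeholders; in production the index would live in a vector database such as Pinecone or Weaviate.

```python
# Minimal RAG-style retrieval sketch: chunk documents, embed them, and
# retrieve the chunks most similar to a query by cosine similarity.
# embed() is a placeholder; swap in a real embedding model in practice.
import numpy as np

def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows (a simple chunking strategy)."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(texts: list[str]) -> np.ndarray:
    """Placeholder embedding: normalized bag-of-characters vectors."""
    vecs = np.zeros((len(texts), 256))
    for i, t in enumerate(texts):
        for ch in t.lower():
            vecs[i, ord(ch) % 256] += 1.0
    return vecs / (np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-9)

documents = [
    "Kubernetes deploys and scales containerized model servers.",
    "Vector databases store embeddings for fast similarity search.",
]
chunks = [c for doc in documents for c in chunk(doc)]
index = embed(chunks)                      # in production, this index lives in a vector DB

query_vec = embed(["Where are embeddings stored?"])[0]
scores = index @ query_vec                 # cosine similarity (vectors are normalized)
for i in np.argsort(scores)[::-1][:3]:
    print(f"{scores[i]:.3f}  {chunks[i][:60]}")
```

The retrieved chunks would then be placed into the prompt of an LLM, which is the step the prompt-engineering and foundation-model requirements above refer to.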

Posted 3 days ago

Apply

15.0 years

0 Lacs

Thiruvananthapuram, Kerala, India

On-site

Join SADA, An Insight Company, as a Sr. Data Engineer!

Your Mission
As a Sr. Data Engineer at SADA, you will work collaboratively with architects and other engineers to recommend, prototype, build, and debug data infrastructures on Google Cloud Platform (GCP). You will have an opportunity to work on real-world data problems facing our customers today. Engagements vary from being purely consultative to requiring heavy hands-on work, and cover a diverse array of domain areas, such as data migrations, data archival and disaster recovery, and big data analytics solutions requiring batch or streaming data pipelines, data lakes, and data warehouses.

You will be expected to run point on whole projects, end-to-end, and to mentor less experienced Data Engineers. You will be recognized as an expert within the team and will build a reputation with Google and our customers. You will demonstrate repeated delivery of project architectures and critical components that other engineers defer to you on for lack of expertise. You will also participate in early-stage opportunity qualification calls, as well as lead client-facing technical discussions for established projects.

Pathway to Success
At SADA, an Insight company, we are in the business of change. We are focused on leading-edge technology that is ever-evolving. We embrace change enthusiastically and encourage agility. This means that not only do our engineers know that change is inevitable, but they embrace this change to continuously expand their skills, preparing for future customer needs.

Your success starts by positively impacting the direction of a fast-growing practice with vision and passion. You will be measured quarterly by the breadth, magnitude, and quality of your contributions, your ability to estimate accurately, customer feedback at the close of projects, how well you collaborate with your peers, and the consultative polish you bring to customer interactions. As you continue to execute successfully, we will build a customized development plan together that leads you through the engineering or management growth tracks.

Expectations
- Required Travel: 30% travel to customer sites, conferences, and other related events.
- Customer Facing: You will interact with customers on a regular basis, sometimes daily, other times weekly/bi-weekly. Common touchpoints occur when qualifying potential opportunities, at project kickoff, throughout the engagement as progress is communicated, and at project close. You can expect to interact with a range of customer stakeholders, including engineers, technical project managers, and executives.

Job Requirements

Required Credentials:
- Google Professional Data Engineer Certified, or able to complete the certification within the first 45 days of employment

Required Qualifications:
- Mastery in at least one of the following domain areas:
  - Big Data: managing Hadoop clusters (all included services), troubleshooting cluster operation issues, migrating Hadoop workloads, architecting solutions on Hadoop, experience with NoSQL data stores like Cassandra and HBase, building batch/streaming ETL pipelines with frameworks such as Spark, Spark Streaming, and Apache Beam, and working with messaging systems like Pub/Sub, Kafka, and RabbitMQ.
  - Data warehouse modernization: building complete data warehouse solutions on BigQuery, including technical architectures, star/snowflake schema designs, query optimization, ETL/ELT pipelines, and reporting/analytic tools. Must have hands-on experience working with batch or streaming data processing software (such as Beam, Airflow, Hadoop, Spark, Hive).
  - Data migration: migrating data stores to reliable and scalable cloud-based stores, including strategies for minimizing downtime. May involve conversion between relational and NoSQL data stores, or vice versa.
  - Backup, restore & disaster recovery: building production-grade data backup and restore, and disaster recovery solutions, up to petabytes in scale.
- Experience writing software in one or more languages such as Python, Java, Scala, or Go
- Experience building production-grade data solutions (relational and NoSQL)
- Experience with systems monitoring/alerting, capacity planning, and performance tuning
- Experience in technical consulting or another customer-facing role

Useful Qualifications:
- Experience working with Google Cloud data products (Cloud SQL, Spanner, Cloud Storage, Pub/Sub, Dataflow, Dataproc, Bigtable, BigQuery, Dataprep, Composer, etc.)
- Experience with IoT architectures and building real-time data streaming pipelines
- Experience operationalizing machine learning models on large datasets
- Demonstrated leadership and self-direction; willingness to teach others and learn new techniques
- Demonstrated skills in selecting the right statistical tools given a data analysis problem

About SADA, An Insight company

Values: We built our core values on themes that internally compel us to deliver our best to our partners, our customers, and to each other. Ensuring a diverse and inclusive workplace where we learn from each other is core to SADA's values. We welcome people of different backgrounds, experiences, abilities, and perspectives. We are an equal opportunity employer. Hunger. Heart. Harmony.

Work with the best: SADA has been the largest Google Cloud partner in North America since 2016 and, for the eighth year in a row, has been named a Google Global Partner of the Year.

Business Performance: SADA has been named to the INC 5000 Fastest-Growing Private Companies list for 15 years in a row, garnering Honoree status. CRN has also named SADA to the Top 500 Global Solutions Providers list for the past 5 years. The overall culture continues to evolve with engineering at its core: 3200+ projects completed, 4000+ customers served, 10K+ workloads and 30M+ users migrated to the cloud.

SADA India is committed to the safety of its employees and recommends that new hires receive a COVID vaccination before beginning work.
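
As an illustration of the batch/streaming pipeline work described above, here is a minimal Apache Beam sketch in Python that reads text files, computes word counts, and writes the results. The input/output paths are hypothetical placeholders, and a real engagement would typically run this on Dataflow and often write to BigQuery instead of text files.

```python
# Minimal Apache Beam batch pipeline sketch: read text, count words, write results.
# The GCS paths are illustrative placeholders; switch to DataflowRunner options on GCP.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def run():
    options = PipelineOptions()  # configure DataflowRunner, project, and region for GCP
    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> beam.io.ReadFromText("gs://example-bucket/input/*.txt")   # placeholder path
            | "Split" >> beam.FlatMap(lambda line: line.lower().split())
            | "PairWithOne" >> beam.Map(lambda word: (word, 1))
            | "Count" >> beam.CombinePerKey(sum)
            | "Format" >> beam.Map(lambda kv: f"{kv[0]},{kv[1]}")
            | "Write" >> beam.io.WriteToText("gs://example-bucket/output/word_counts")  # placeholder path
        )

if __name__ == "__main__":
    run()
```

The same pipeline shape (read, transform, aggregate, write) carries over to streaming sources such as Pub/Sub, which is what distinguishes Beam pipelines from batch-only ETL scripts.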

Posted 3 days ago

Apply

5.0 years

0 Lacs

Thiruvananthapuram, Kerala, India

On-site

Join SADA, An Insight Company, as a Senior Cloud Platform Engineer!

Your Mission
As a Senior Cloud Platform Engineer, you will work collaboratively to deliver high-quality infrastructure projects on Google Cloud Platform for our cloud and hybrid-cloud customers. You will have an opportunity to work on real-world problems facing our customers in the field. Engagements vary from being purely consultative to requiring heavy hands-on work, and cover a range of domain areas, such as workload migrations from on-premises or other clouds to GCP, standing up production-ready infrastructure on GCP, building hybrid-cloud solutions leveraging infrastructure-as-code automation, and developing fully automated CI/CD pipelines. Moreover, you will be someone who drives thought leadership, delivering best-practice recommendations for internal and external solutions. You possess a diverse background in areas such as architecture design, distributed systems, infrastructure migration, and deployment strategies. To be successful, one must know how to navigate ambiguity, have extensive experience working with cloud technologies, have technical depth, and enjoy working with customers.

In this role, you will:
- Be expected to tackle all technical challenges on whole projects and to mentor less experienced Cloud Infrastructure Engineers. You should be able to work independently, but should be a major participant in team reviews.
- Be recognized as having technical mastery within the practice, with an established reputation with Google and our customers.
- Have demonstrable experience with public-facing activities such as blogs, presentations, webinars, and OSS contributions.
- Ensure the best architecture and engineering approach is applied. You will be expected to repeatedly deliver complex projects, and will be the owner of the complete customer outcome, including complex technical components of the engagement.
- Participate in early-stage opportunity qualification calls, as well as lead client-facing technical discussions for established projects.

Pathway to Success
Our singular goal is to provide customers with the best possible experience in migrating, building, modernizing, and operationalizing their systems in Google Cloud Platform. Your success starts by positively impacting the direction of a dynamic practice with vision and passion. You will be measured quarterly by the breadth, magnitude, and quality of your contributions, your ability to estimate accurately, customer feedback at the close of projects, how well you collaborate with your peers, and the consultative polish you bring to customer interactions. As you continue to execute successfully, we will build a customized development plan together that leads you through the engineering or management growth tracks.

Expectations
- Required Travel: 10% travel to customer sites, conferences, and other related events.
- Customer Facing: You will interact with customers on a regular basis, sometimes daily, other times weekly/bi-weekly. Common touch points occur when qualifying potential opportunities, at project kickoff, throughout the engagement as progress is communicated, and at project close. You can expect to interact with a range of customer stakeholders, including engineers, technical project managers, and executives.
- Onboarding/Training: The first several weeks of onboarding are dedicated to learning and will encompass learning materials/assignments and compliance training, as well as meetings with relevant individuals. Details of the timeline are shared closer to the start date.

Job Requirements

Required Credentials:
- Google Cloud Architect Certified, or able to complete the certification within the first 45 days of employment
- A secondary Google Cloud certification in any other specialization
- Expert- or Professional-level certifications in either or both AWS and Azure

Required Qualifications:
- 5+ years of relevant experience in deploying, migrating, configuring, and managing applications on any of the public clouds.
- Technical mastery of networking, VPNs, compute infrastructure (servers, databases, firewalls, load balancers, etc.), and architecting/developing/maintaining production-grade systems in virtualized environments.
- Applied experience migrating complex, multi-tiered workloads from on-prem to the cloud, including provisioning the target infrastructure and executing cutover plans that minimize system downtime.
- Applied experience delivering immutable infrastructure-as-code solutions using tools like Terraform, Ansible, Chef, Puppet, Salt, and Packer.
- Applied experience delivering continuous integration/continuous delivery pipelines, utilizing techniques like blue/green and canary deployments, with tools such as Jenkins, CircleCI, TravisCI, and Spinnaker.
- Working knowledge of systems monitoring, capacity planning, and performance tuning.
- Experience writing software in one or more languages such as Go and Python.

Useful Qualifications:
- Knowledge and understanding of industry trends and new technologies, and the ability to apply trends to architectural needs.
- Proven experience and understanding of architecture principles across infrastructure platforms, security, data, integration, and application layers.
- Experience working with containerization technologies (Kubernetes, Docker, etc.)
- Experience as an administrator on a variety of Linux distributions.
- Experience with information security practices and procedures.
- Strong working knowledge of VMware, KVM, Xen, Hyper-V, or other virtualization software.

About SADA, An Insight company

Values: We built our core values on themes that internally compel us to deliver our best to our partners, our customers, and to each other. Ensuring a diverse and inclusive workplace where we learn from each other is core to SADA's values. We welcome people of different backgrounds, experiences, abilities, and perspectives. We are an equal opportunity employer. Hunger. Heart. Harmony.

Work with the best: SADA has been the largest Google Cloud partner in North America since 2016 and, for the eighth year in a row, has been named a Google Global Partner of the Year.

Business Performance: SADA has been named to the INC 5000 Fastest-Growing Private Companies list for 15 years in a row, garnering Honoree status. CRN has also named SADA to the Top 500 Global Solutions Providers list for the past 5 years. The overall culture continues to evolve with engineering at its core: 3200+ projects completed, 4000+ customers served, 10K+ workloads and 30M+ users migrated to the cloud.

SADA India is committed to the safety of its employees and recommends that new hires receive a COVID vaccination before beginning work.

Posted 3 days ago

Apply

5.0 years

0 Lacs

thiruvananthapuram, kerala, india

On-site

Join SADA, An Insight company as a Senior Cloud platform Engineer ! Your Mission As a Senior Cloud Platform Engineer, you will work collaboratively to deliver high quality infrastructure projects on Google Cloud Platform for our cloud and hybrid-cloud customers. You will have an opportunity to work on real-world problems facing our customers in the field. Engagements vary from being purely consultative to requiring heavy hands-on work, and cover a range of domain areas, such as workload migrations from on-premises or other Clouds to GCP, standing up production ready infrastructure on GCP, building hybrid-cloud solutions leveraging infrastructure-as-code automation, and developing fully automated CI/CD pipelines. Moreover, you will be someone who will drive thought leadership, delivering best practice recommendations for internal and external solutions. You possess a diverse background in areas such as architecture design, distributed systems, infrastructure migration, and deployment strategies. To be successful, one must know how to navigate ambiguity, have extensive experience working with cloud technologies, have technical depth, and enjoy working with customers. In this role, you will: Be expected to tackle all technical challenges on whole projects and to mentor less experienced Cloud Infrastructure Engineers. You should be able to work independently, but should be a major participant in team reviews. Be recognized as having technical mastery within the practice with an established reputation with Google and our customers. Have demonstrable experience with public facing activities such as blogs, presentations, webinars, and OSS contributions. Ensure the best architecture and engineering approach is applied. You will be expected to repeatedly deliver complex projects, and will be the owner of the complete customer outcome, including complex technical components of the engagement. Participate in early-stage opportunity qualification calls, as well as lead client-facing technical discussions for established projects. Pathway to Success Our singular goal is to provide customers with the best possible experience in migrating, building, modernizing, and operationalizing their systems in Google Cloud Platform. Your success starts by positively impacting the direction of a dynamic practice with vision and passion. You will be measured quarterly by the breadth, magnitude, and quality of your contributions, your ability to estimate accurately, customer feedback at the close of projects, how well you collaborate with your peers, and the consultative polish you bring to customer interactions. As you continue to execute successfully, we will build a customized development plan together that leads you through the engineering or management growth tracks. Expectations Required Travel: 10% travel to customer sites, conferences, and other related events. Customer Facing: You will interact with customers on a regular basis, sometimes daily, other times weekly/bi-weekly. Common touch points occur when qualifying potential opportunities, at project kickoff, throughout the engagement as progress is communicated, and at project close. You can expect to interact with a range of customer stakeholders, including engineers, technical project managers, and executives. Onboarding/Training - The first several weeks of onboarding are dedicated to learning and will encompass learning materials/assignments and compliance training, as well as meetings with relevant individuals. 
Details of the timeline are shared closer to the start date. Job Requirements Required Credentials Google Cloud Architect Certified or able to complete within the first 45 days of employment. A secondary Google Cloud certification in any other specialization. Expert or Professional level certifications in either or both AWS and Azure. Required Qualifications: 5+ years of relevant experience in deploying, migrating, configuring and managing applications on any of the public clouds. Technical mastery of networking, VPNs, compute infrastructure (servers, databases, firewalls, load balancers, etc), and architecting/developing/maintaining production-grade systems in virtualized environments. Applied experience migrating complex, multi-tiered workloads from on-prem to the cloud, including provisioning the target infrastructure and executing cutover plans that minimize system downtime. Applied experience delivering immutable infrastructure-as-code solutions using tools like Terraform, Ansible, Chef, Puppet, Salt, and Packer. Applied experience delivering continuous integration/continuous delivery pipelines, utilizing techniques like blue/green and canary deployments, with tools such as Jenkins, CircleCI, TravisCI, and Spinnaker. Working knowledge of systems monitoring, capacity planning, and performance tuning. Experience writing software in one or more languages such as Go and Python Useful Qualifications Knowledge and understanding of industry trends and new technologies and the ability to apply trends to architectural needs. Proven experience and understanding of architecture principles across infrastructure platforms, security, data, integration, and application layers. Experience working with containerization technologies (Kubernetes, Docker, etc) Experience being an administrator on a variety of Linux distributions. Experience with information security practices and procedures Strong working knowledge of VMware, KVM, Xen, Hyper-V, or other virtualization software About SADA An Insight company Values: We built our core values on themes that internally compel us to deliver our best to our partners, our customers and to each other. Ensuring a diverse and inclusive workplace where we learn from each other is core to SADA's values. We welcome people of different backgrounds, experiences, abilities, and perspectives. We are an equal opportunity employer. Hunger Heart Harmony Work with the best: SADA has been the largest Google Cloud partner in North America since 2016 and, for the eighth year in a row, has been named a Google Global Partner of the Year. Business Performance: SADA has been named to the INC 5000 Fastest-Growing Private Companies list for 15 years in a row, garnering Honoree status. CRN has also named SADA on the Top 500 Global Solutions Providers list for the past 5 years. The overall culture continues to evolve with engineering at its core: 3200+ projects completed, 4000+ customers served, 10K+ workloads and 30M+ users migrated to the cloud. SADA India is committed to the safety of its employees and recommends that new hires receive a COVID vaccination before beginning work .

Posted 3 days ago

Apply

15.0 years

0 Lacs

thiruvananthapuram, kerala, india

On-site

Join SADA, An Insight Company as a Sr. Data Engineer! Your Mission As a Sr. Data Engineer at SADA, you will work collaboratively with architects and other engineers to recommend, prototype, build and debug data infrastructures on Google Cloud Platform (GCP). You will have an opportunity to work on real-world data problems facing our customers today. Engagements vary from being purely consultative to requiring heavy hands-on work, and cover a diverse array of domain areas, such as data migrations, data archival and disaster recovery, and big data analytics solutions requiring batch or streaming data pipelines, data lakes and data warehouses. You will be expected to run point on whole projects, end-to-end, and to mentor less experienced Data Engineers. You will be recognized as an expert within the team and will build a reputation with Google and our customers. You will demonstrate repeated delivery of project architectures and critical components that other engineers defer to you on for lack of expertise. You will also participate in early-stage opportunity qualification calls, as well as lead client-facing technical discussions for established projects. Pathway to Success At SADA, an Insight company, we are in the business of change. We are focused on leading-edge technology that is ever-evolving. We embrace change enthusiastically and encourage agility. This means that not only do our engineers know that change is inevitable, but they embrace this change to continuously expand their skills, preparing for future customer needs. Your success starts by positively impacting the direction of a fast-growing practice with vision and passion. You will be measured quarterly by the breadth, magnitude and quality of your contributions, your ability to estimate accurately, customer feedback at the close of projects, how well you collaborate with your peers, and the consultative polish you bring to customer interactions. As you continue to execute successfully, we will build a customized development plan together that leads you through the engineering or management growth tracks. Expectations Required Travel - 30% travel to customer sites, conferences, and other related events. Customer Facing - You will interact with customers on a regular basis, sometimes daily, other times weekly/bi-weekly. Common touchpoints occur when qualifying potential opportunities, at project kickoff, throughout the engagement as progress is communicated, and at project close. You can expect to interact with a range of customer stakeholders, including engineers, technical project managers, and executives. Job Requirements Required Credentials: Google Professional Data Engineer Certified or able to complete within the first 45 days of employment Required Qualifications: Mastery in at least one of the following domain areas: Big Data: managing Hadoop clusters (all included services), troubleshooting cluster operation issues, migrating Hadoop workloads, architecting solutions on Hadoop, experience with NoSQL data stores like Cassandra and HBase, building batch/streaming ETL pipelines with frameworks such as Spark, Spark Streaming and Apache Beam, and working with messaging systems like Pub/Sub, Kafka and RabbitMQ. Data warehouse modernization: building complete data warehouse solutions on BigQuery, including technical architectures, star/snowflake schema designs, query optimization, ETL/ELT pipelines and reporting/analytic tools.
Must have hands-on experience working with batch or streaming data processing software (such as Beam, Airflow, Hadoop, Spark, Hive). Data migration: migrating data stores to reliable and scalable cloud-based stores, including strategies for minimizing downtime. May involve conversion between relational and NoSQL data stores, or vice versa. Backup, restore & disaster recovery: building production-grade data backup and restore, and disaster recovery solutions. Up to petabytes in scale. Experience writing software in one or more languages such as Python, Java, Scala, or Go Experience building production-grade data solutions (relational and NoSQL) Experience with systems monitoring/alerting, capacity planning and performance tuning Experience in technical consulting or other customer facing role Useful Qualifications: Experience working with Google Cloud data products (CloudSQL, Spanner, Cloud Storage, Pub/Sub, Dataflow, Dataproc, Bigtable, BigQuery, Dataprep, Composer, etc) Experience with IoT architectures and building real-time data streaming pipelines Experience operationalizing machine learning models on large datasets Demonstrated leadership and self-direction -- willingness to teach others and learn new techniques Demonstrated skills in selecting the right statistical tools given a data analysis problem About SADA An Insight company Values: We built our core values on themes that internally compel us to deliver our best to our partners, our customers and to each other. Ensuring a diverse and inclusive workplace where we learn from each other is core to SADA's values. We welcome people of different backgrounds, experiences, abilities, and perspectives. We are an equal opportunity employer. Hunger Heart Harmony Work with the best: SADA has been the largest Google Cloud partner in North America since 2016 and, for the eighth year in a row, has been named a Google Global Partner of the Year. Business Performance: SADA has been named to the INC 5000 Fastest-Growing Private Companies list for 15 years in a row, garnering Honoree status. CRN has also named SADA on the Top 500 Global Solutions Providers list for the past 5 years. The overall culture continues to evolve with engineering at its core: 3200+ projects completed, 4000+ customers served, 10K+ workloads and 30M+ users migrated to the cloud. SADA India is committed to the safety of its employees and recommends that new hires receive a COVID vaccination before beginning work .
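The posting above asks for hands-on experience building batch or streaming pipelines with frameworks such as Apache Beam. For orientation only, here is a minimal Beam batch pipeline in Python that counts events per user from a CSV-style file and writes the totals back out. The file paths and field layout are invented for the example; a real engagement would typically swap in Pub/Sub or BigQuery connectors and run the same pipeline on Dataflow.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_event(line: str):
    """Parse a 'user_id,event_type' line into a (user_id, 1) pair."""
    user_id, _event_type = line.split(",", 1)
    return (user_id.strip(), 1)


def run(input_path: str = "events.csv", output_path: str = "counts"):
    # DirectRunner by default; pass --runner=DataflowRunner plus GCP
    # options to execute the same pipeline on Google Cloud Dataflow.
    options = PipelineOptions()
    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "ReadEvents" >> beam.io.ReadFromText(input_path)
            | "ParseEvents" >> beam.Map(parse_event)
            | "CountPerUser" >> beam.CombinePerKey(sum)
            | "FormatRows" >> beam.MapTuple(lambda user, total: f"{user},{total}")
            | "WriteCounts" >> beam.io.WriteToText(output_path)
        )


if __name__ == "__main__":
    run()
```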

Posted 3 days ago

Apply

3.0 years

0 Lacs

pune, maharashtra, india

On-site

Particle41 is looking for a talented and versatile Python Developer to join our innovative team. The ideal candidate will have at least 3 years of experience in Python development and will be responsible for designing, developing, and maintaining scalable applications. You will work closely with cross-functional teams to deliver high-quality solutions. While the primary focus is on Python and its frameworks, familiarity with Node.js and TypeScript will be considered a valuable plus. In This Role, You Will: Software Development Build scalable, efficient, and secure backend applications primarily using Python and frameworks like Django, Flask, or FastAPI. Create RESTful APIs and services that are reliable, performant, and easy to integrate with other applications. Work with both relational and non-relational databases, writing optimized queries and ensuring data integrity and performance. Write clean, maintainable code following industry standards, conduct code reviews, and maintain comprehensive documentation. Requirements Gathering and Analysis Collaborate with designers, product managers, and other stakeholders to gather requirements and translate them into technical solutions. Participate in requirement analysis sessions to understand business needs and user requirements. Provide technical insights and recommendations during the requirements-gathering process. Agile Development Participate in Agile development processes, including sprint planning, daily stand-ups, and sprint reviews. Adapt to changing priorities and requirements in a fast-paced Agile environment. Continuous Learning and Innovation Propose innovative solutions to improve the performance, security, scalability, and maintainability of applications. Continuously seek opportunities to optimize and refactor existing codebase for better efficiency. Stay up to date with cloud platforms such as AWS, Azure, or Google Cloud Platform. Collaboration and Mentorship Mentor junior developers and provide technical guidance and support as needed. Collaborate effectively with cross-functional teams, including designers, testers, and product managers. Foster a collaborative and inclusive work environment where ideas are shared and valued. Skills and Experience We Value: Bachelor's degree in computer science, Engineering, or related field. Proven experience using Python and frameworks, with a minimum of 3 years of experience. Experience designing and building RESTful APIs, microservices, and scalable server-side solutions. Comfortable with relational (PostgreSQL, MySQL) and non-relational databases (MongoDB, Redis) including query optimization and data modeling. Familiarity with Git, GitHub/GitLab workflows, and understanding of continuous integration/deployment processes Good-to-have technical exposure: Node.js and TypeScript for full-stack collaboration or cross-functional projects. Front-end framework experience (React, Angular, Vue.js). Cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes). Excellent problem-solving and analytical skills, with a keen attention to detail. Adaptability and willingness to learn new technologies and tools as needed. About Particle41 Our core values of Empowering, Leadership, Innovation, Teamwork, and Excellence drive everything we do to achieve the ultimate outcomes for our clients. Empowering Leadership for Innovation in Teamwork with Excellence (ELITE) E - Empowering: Enabling individuals to reach their full potential. 
L - Leadership: Taking initiative and guiding each other toward success. I - Innovation: Embracing creativity and new ideas to stay ahead. T - Teamwork: Collaborating with empathy to achieve common goals. E - Excellence: Striving for the highest quality in everything we do. We seek team members who embody these values and are committed to contributing to our mission. Particle41 welcomes individuals from all backgrounds who are committed to our mission and values. We provide equal employment opportunities to all employees and applicants, ensuring that hiring and employment decisions are based on merit and qualifications without discrimination based on race, color, religion, caste, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, local, or international laws. This policy applies to all aspects of employment and hiring. We appreciate your interest and encourage applicants from these regions to apply. If you need any assistance during the application or interview process, please feel free to reach out to us at careers@Particle41.com.
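The role above centers on designing RESTful APIs with Python frameworks such as Flask or FastAPI. As a small, hedged illustration of that style of work, the sketch below defines a FastAPI service with two typed endpoints backed by an in-memory store; the resource name and fields are made up for the example and are not drawn from any Particle41 project.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Example Service")


class Task(BaseModel):
    id: int
    title: str
    done: bool = False


# In-memory store standing in for a real database layer.
_tasks: dict[int, Task] = {}


@app.post("/tasks", response_model=Task, status_code=201)
def create_task(task: Task) -> Task:
    """Create a task and return it as confirmation."""
    _tasks[task.id] = task
    return task


@app.get("/tasks/{task_id}", response_model=Task)
def get_task(task_id: int) -> Task:
    """Fetch a task by id, returning 404 if it does not exist."""
    if task_id not in _tasks:
        raise HTTPException(status_code=404, detail="Task not found")
    return _tasks[task_id]


# Run locally with: uvicorn main:app --reload
```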

Posted 3 days ago

Apply

4.0 years

0 Lacs

tamil nadu, india

Remote

Role Overview: We are seeking skilled Backend Developers to design, build, and maintain efficient, scalable, and secure server-side logic and services. The ideal candidate will have strong expertise in Python, Flask, and Google Cloud Platform (GCP), with experience building APIs, handling databases, and integrating cloud services in production environments. Required Experience: 4+ Years Location: Chennai, open to remote for strong candidates Key Responsibilities: Collaborate with project teams to understand business requirements and develop efficient, high-quality code. Design and implement low-latency, high-availability, and performant applications using frameworks such as Flask or FastAPI. Integrate multiple data sources and databases into a unified system while ensuring seamless data storage and third-party library/package integration. Create scalable and optimized database schemas to support complex business logic and manage large volumes of data. Conduct thorough testing using pytest and unittest, debugging applications to ensure they run smoothly. Required Skills & Qualifications: 3+ years of experience as a Python developer with strong communication skills. Bachelor's degree in Computer Science, Software Engineering or a related field. In-depth knowledge of Python frameworks such as Flask or FastAPI. Strong expertise in cloud technologies, GCP preferred. Deep understanding of microservices architecture, multi-tenant architecture, and best practices in Python development. Familiarity with serverless architecture and frameworks like GCP Cloud Functions. Experience with deployment using Docker, Nginx, Gunicorn. Hands-on experience with SQL and NoSQL databases such as MySQL and Firebase. Proficiency with Object Relational Mappers (ORMs) like SQLAlchemy. Demonstrated ability to handle multiple API integrations and write modular, reusable code. Strong knowledge of user authentication and authorization mechanisms across multiple systems and environments. Familiarity with scalable application design principles and event-driven programming in Python. Solid experience in unit testing, debugging, and code optimization. Hands-on experience with modern software development methodologies, including Agile and Scrum. Experience with CI/CD pipelines and automation tools like Jenkins, GitLab CI, or CircleCI. Experience with version control systems.
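Because the role above pairs Flask API development with pytest-based testing, here is a minimal, hypothetical sketch of a Flask endpoint and the kind of unit tests such a role involves. The route name and payload are illustrative assumptions, not taken from the employer's codebase.

```python
from flask import Flask, jsonify, request


def create_app() -> Flask:
    app = Flask(__name__)

    @app.post("/orders")
    def create_order():
        payload = request.get_json(silent=True) or {}
        if "item" not in payload:
            return jsonify(error="item is required"), 400
        # A real service would persist the order here (e.g., via SQLAlchemy).
        return jsonify(item=payload["item"], status="created"), 201

    return app


# --- pytest-style tests (e.g., in test_app.py) ---
def test_create_order_requires_item():
    client = create_app().test_client()
    response = client.post("/orders", json={})
    assert response.status_code == 400


def test_create_order_succeeds():
    client = create_app().test_client()
    response = client.post("/orders", json={"item": "widget"})
    assert response.status_code == 201
    assert response.get_json()["status"] == "created"
```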

Posted 3 days ago

Apply

5.0 years

0 Lacs

india

Remote

About FullStack FullStack is the most transparent IT talent network, connecting highly skilled individuals with top global companies and Silicon Valley startups for remote, on-demand projects. We focus on building a trusted, high-performance network where talent can thrive in a positive, respectful, and supportive environment. By prioritizing transparency, fair opportunities, and professional growth, we empower our engineers to deliver their best work while collaborating with innovative teams worldwide. We’re Most Proud Of Offering life-changing opportunities for talented software professionals across the Americas. Enhancing highly-skilled software development teams of hundreds of the world’s greatest companies. Our 4.2-star rating on GlassDoor. Our client Net Promoter Score of 68, twice the industry average. The Opportunity Join our talent network and connect with U.S. clients for flexible, project-based development work as an AWS/Kubernetes DevOps Engineer. You will integrate directly into our client's team and work alongside their existing designers and engineers on a daily basis. What We're Looking For 5+ years of professional experience as a DevOps Engineer. Advanced English is required. Successful completion of a four-year college degree is required. Strong experience working with Linux systems and related tooling (kernel, shell, system libraries, file systems, client-server protocols). Experience with Linux container technologies (Docker, Kubernetes). Experience with public and private clouds: GCP or AWS (preferable). Understanding of cloud orchestration frameworks (Terraform, Kubernetes, etc.) and their role in IT transformation. The ability to write code fluently in Python or Bash. Deep understanding of the software development lifecycle, including git-based CI/CD pipelines. Networking: experience with network theory and protocols, e.g. TCP/IP, UDP, DNS, HTTP, TLS, and load balancing. Familiarity with distributed message buses such as Kafka, Confluent. Ability to work through new and difficult issues and contribute to libraries as needed. Ability to create and maintain continuous integration and delivery of applications. Forensic attention to detail. A positive mindset and a can-do attitude. Experience working on Agile / Scrum teams. Meaningful experience working on large, complex systems. Ability to take extreme ownership over your work. Every day is a challenge to ensure you are performing to the expectations you and your team have agreed upon. Ability to identify with the goals of FullStack's clients, and dedicate yourself to delivering on the commitments you and your team make to them. Ability to consistently work 40 hours per week. What We Offer Competitive pay. 100% remote work. The ability to work with leading startups and Fortune 500 companies. Continuing education opportunities. Opportunities to grow and expand your career. Our talent network is committed to fostering a diverse, inclusive, and accessible environment where IT professionals can find opportunities that match their skills. We welcome individuals of all races, religions, genders, sexual orientations, national origins, abilities, and experiences. Our focus is on connecting talent with projects based on qualifications, ensuring a fair and transparent process that values diverse perspectives and global collaboration. Learn more about our Applicants Privacy Notice.
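The listing above expects fluency in Python or Bash for day-to-day DevOps automation, for example wiring service checks into git-based CI/CD pipelines. Below is a small, self-contained Python sketch of an HTTP readiness check with retries that exits nonzero so a pipeline step can fail fast; the URL and retry budget are placeholder assumptions for illustration.

```python
import sys
import time
import urllib.error
import urllib.request


def wait_for_service(url: str, attempts: int = 10, delay_seconds: float = 3.0) -> bool:
    """Poll url until it returns HTTP 200 or the retry budget is spent."""
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=5) as response:
                if response.status == 200:
                    print(f"service healthy after {attempt} attempt(s)")
                    return True
        except (urllib.error.URLError, TimeoutError) as exc:
            print(f"attempt {attempt}/{attempts} failed: {exc}")
        time.sleep(delay_seconds)
    return False


if __name__ == "__main__":
    # Placeholder endpoint; a real pipeline would pass the deployed service URL.
    target = sys.argv[1] if len(sys.argv) > 1 else "http://localhost:8080/healthz"
    # A nonzero exit status lets a CI/CD job mark the deployment step as failed.
    sys.exit(0 if wait_for_service(target) else 1)
```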

Posted 3 days ago

Apply

8.0 - 12.0 years

17 - 22 Lacs

hyderabad

Remote

Hi Aspirant, Greetings from Tech Blocks - IT Software - Product Engineering - Hitech City - Hyderabad! About Us: Tech Blocks is a leading global digital product development firm. We unify strategy, design and technology with continuous growth-centric digital product engineering solutions for Fortune 500 companies and global brands. We are looking for a Java Backend Developer with Java 8, Spring Boot, Microservices, experience in any cloud environment (AWS / Azure / GCP), and ReactJS or Angular JS. Role: Senior Java Full Stack Developer. Mandatory Skills Required: Core Java, Java 8, Spring Boot, Microservices, Web Services, experience with any database, and experience with any one cloud (AWS / GCP / Azure). Front-end experience with either React JS or Angular JS is mandatory. Experience Level: Minimum 8+ years. Job Opportunity: Full Time. Mode of Work: Remote. Notice period: Immediate to 15 days only (more than 15 days' notice is not eligible). Job Role: As a Senior Java Full Stack Developer at Tech Blocks, you will be a key contributor to the design, development, and optimization of our backend services and microservices. You will work with cutting-edge technologies, including Java, Spring Boot, SQL databases, and cloud platforms, to build scalable and reliable solutions that power our applications. Key Responsibilities Minimum of 8 years of hands-on experience in Java Full Stack Development. Design and develop Java-based backend services and microservices using Spring Boot. Hands-on front-end experience with either React JS or Angular JS is mandatory. Strong expertise in Java, Spring Boot, and microservices architecture. Proficiency in SQL database design, optimization, and querying. Utilize cloud platforms such as AWS, Azure, or GCP to deploy and manage services. Develop and maintain SQL queries and database schema designs. Familiarity with API testing and debugging tools like Postman. Experience with cloud platforms such as GCP, Azure, or AWS. Strong problem-solving skills and attention to detail. Excellent communication and collaboration skills. Note: If you are interested in applying for this job, please share your updated resume with Kranthikt@tblocks.com / Kranthi - 8522804902. Warm Regards, Kranthi Kumar | kranthikt@tblocks.com Contact: 8522804902 Senior Talent Acquisition Specialist Toronto | Ahmedabad | Hyderabad | Pune www.tblocks.com

Posted 3 days ago

Apply

0 years

0 Lacs

hyderabad, telangana, india

Remote

Who We Are: ( www.credera.com ) Credera, trading as TA Digital, is a global consulting firm that combines transformational consulting capabilities, deep industry knowledge, and AI and technology expertise to deliver valuable customer experiences and accelerated growth across a broad range of industries worldwide. Our one-of-a-kind global boutique approach means we provide our clients with tailored solutions unique to their organization that can scale due to our extensive footprint. As a values-led organization, our mission is to make an extraordinary impact on our clients, our people, and our community. We believe it is this approach that has allowed us to work with and transform the most influential brands and organizations in the world, from strategy through to execution. More information is available at www.credera.com. We are part of the OPMG Group of Companies, a division of Omnicom Group Inc. Location: Hyderabad Work Mode: Hybrid (3 Days per week from Office) Shift timings: 5:30am to 2:30pm IST Key Responsibilities: Develop and maintain cross-platform mobile applications using React.JS/Native. Write clean, efficient, and maintainable code using Core JavaScript (Vanilla JS) and ES6+ features. Integrate with cloud services, preferably Google Cloud Platform (GCP) or AWS. Implement and maintain CI/CD pipelines for streamlined deployment. Follow and enforce coding best practices, including code reviews and unit testing. Troubleshoot and debug complex issues across the stack. Collaborate with cross-functional teams including designers, backend developers, and product managers. Required Skills & Qualifications: Strong proficiency in Core JavaScript / Vanilla JS and ES6+. Hands-on experience with React Native for mobile app development. Experience with cloud platforms (GCP preferred, AWS acceptable). Familiarity with CI/CD tools and pipelines. Solid understanding of software development best practices. Excellent debugging and problem-solving skills. Strong communication and collaboration abilities. Professional Attributes You Possess: Excellent communication Hybrid Work Model: Our employees have the flexibility to work remotely two days per week. We expect our team members to spend 3 days per week in person with the flexibility to choose the days and times that work best for both them and their project or internal teams. This could be at a Credera office or at the client site. You'll work closely with your project team to align on how you balance both the flexibility that we want to provide with the connection of being together to produce amazing results for our clients. The why: We are passionate about growing our people both personally and professionally. Our philosophy is that in-person engagement is critical for our ability to develop deep relationships with our clients and our team members – it's how we earn trust, learn from others, and ultimately become better consultants and professionals. Travel: Our goal is to keep out-of-market travel to a minimum and most projects do not require significant travel. While certain projects can require frequent travel (up to 80% for a period of time), our average travel percentage over a year for team members is typically between 10-30%. We try to take a personal approach to travel. You will submit your travel preferences which our staffing teams will consider when aligning you to a role. Credera will never ask for money up front and will not use apps such as Facebook Messenger, WhatsApp or Google Hangouts for communicating with you. 
You should be very wary of, and carefully scrutinize, any job opportunity that asks for money prior to starting and/or one where all communications take place exclusively via chat.

Posted 3 days ago

Apply

15.0 years

0 Lacs

trivandrum, kerala, india

On-site

Job Description Job Title: Senior DevOps Engineer Experience: 15+ years overall software industry experience; 5+ years hands-on DevOps (Docker, Kubernetes, ELK) Job Summary: We are seeking a highly experienced Senior DevOps Engineer to join our dynamic team. The ideal candidate has a broad and deep background in the software industry, with a proven track record in designing, implementing, and managing modern DevOps solutions, especially using Docker, Kubernetes, and the ELK stack. You’ll be responsible for architecting, automating, and optimizing our applications and infrastructure to drive continuous integration, continuous delivery, and high reliability. Key Responsibilities: Design, implement, and manage scalable, secure, and highly available container orchestration platforms using Docker and Kubernetes. Develop and manage CI/CD pipelines, version control systems, and automation frameworks. Deploy, configure, and maintain monitoring/logging solutions leveraging the ELK stack (Elasticsearch, Logstash, Kibana). Collaborate closely with development, QA, and operations teams to establish best practices for infrastructure as code, configuration management, and release engineering. Drive efforts toward system reliability, scalability, and performance optimization. Troubleshoot and resolve issues in development, test, and production environments. Mentor and guide junior team members; contribute to DevOps process improvements and strategy. Ensure security, compliance, and governance are adhered to in all DevOps operations. Participate in on-call rotations for production support, when necessary. Required Skills and Qualifications: 15+ years overall experience in the software industry, with strong exposure to large-scale enterprise environments. At least 5 years of hands-on experience with: Docker: containerization, image management, best practices. Kubernetes: architecture, deployment, scaling, upgrades, troubleshooting. ELK stack: design, deployment, maintenance, performance tuning, dashboard creation. Extensive experience with CI/CD tools (Jenkins, GitLab CI/CD, etc.) Proficiency with one or more scripting/programming languages (Python, Bash, Go, etc.) Strong background in infrastructure automation and configuration management (Ansible, Terraform, etc.) Solid understanding of networking, load balancing, firewalls, and security best practices. Experience with public cloud platforms (OCI, AWS, Azure, or GCP) is a plus. Strong analytical, problem-solving, and communication skills. Preferred Qualifications: Relevant certifications (CKA, CKAD, AWS DevOps Engineer, etc.) Experience with microservices architecture and service mesh solutions. Exposure to application performance monitoring (APM) tools.
Qualifications Career Level - IC3 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
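Since the role above leans heavily on the ELK stack, the sketch below shows, under stated assumptions, how an application log document might be indexed and queried with the official elasticsearch Python client (8.x-style keyword arguments). The host, index name, and field names are assumptions for illustration, not details from Oracle's environment.

```python
from datetime import datetime, timezone

from elasticsearch import Elasticsearch  # pip install elasticsearch

# Assumed local, unsecured cluster purely for demonstration.
es = Elasticsearch("http://localhost:9200")

# Index one application log document into a daily index.
index_name = f"app-logs-{datetime.now(timezone.utc):%Y.%m.%d}"
es.index(
    index=index_name,
    document={
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "level": "ERROR",
        "service": "checkout",
        "message": "payment gateway timeout",
    },
)

# Query recent ERROR entries for the same service.
response = es.search(
    index=index_name,
    query={
        "bool": {
            "must": [
                {"match": {"level": "ERROR"}},
                {"match": {"service": "checkout"}},
            ]
        }
    },
    size=10,
)
for hit in response["hits"]["hits"]:
    print(hit["_source"]["timestamp"], hit["_source"]["message"])
```

In a production ELK deployment the indexing side would normally be handled by Logstash or a Beats shipper rather than application code; the client calls here simply make the data flow concrete.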

Posted 3 days ago

Apply

40.0 years

0 Lacs

trivandrum, kerala, india

On-site

Job Description At Oracle Cloud Infrastructure (OCI), we build the future of the cloud for Enterprises as a diverse team of fellow creators and inventors. We act with the speed and attitude of a start-up, with the scale and customer-focus of the leading enterprise software company in the world. Oracle Generative AI Service is an exciting team in Oracle Cloud Infrastructure. We are delivering innovative services at the intersection of artificial intelligence and cloud infrastructure. In the Generative AI Service team, you will build and operate massive-scale cloud services leveraging state-of-the-art machine learning technologies. We are committed to providing the best in cloud products to meet the needs of our customers who are tackling some of the world’s most challenging problems. You will be part of a team of smart, hands-on engineers with the expertise and passion to solve difficult problems in distributed highly available services and virtualized infrastructure. At every level, our engineers have a significant technical and business impact by designing and building innovative new systems to power our customers’ business-critical applications. Your Opportunity: As a principal member of the software engineering division, you will take an active role in the design, development, deployment and operations of Oracle’s Generative AI offerings. What You’ll Do Develop Generative AI services and enterprise-specific services around Generative AI with best-in-class security for customers. Design and develop scalable infrastructure, including microservices and backend for UI, dashboards and other interactive applications. Participate in cloud native development involving identity, logging, tagging, limits, service gateway, private network and other infrastructure concepts. Work with data scientists/SW engineers to build out other parts of the infrastructure, effectively communicating your needs and understanding theirs, and address external and internal stakeholders' product challenges. Mentor and guide junior engineers on design, development, coding, testing and deployment. Participate in on-call duty and production operations on rotation. Qualifications Bachelor’s or master’s degree (preferred) in Computer Science, Computer Engineering, or related technical field. Expert in at least one high-level language such as Java/C#/C++ (Java preferred) Expert in at least one scripting language such as Python, JavaScript, Shell (Python and JavaScript preferred) Working knowledge of at least one UI framework, e.g., React JS, Angular, etc. Practical experience in design, implementation and production deployment of distributed systems using microservices architecture and APIs using common frameworks like Spring Boot (Java), Vertex.io, etc. Working knowledge of current techniques and approaches in machine learning and statistical or mathematical models Practical experience working in a cloud environment: Oracle Cloud (OCI), AWS, GCP, Azure, Heroku or similar technology. Experience or willingness to learn and work in Agile and iterative development and DevOps processes. Strong drive to learn and master new technologies and techniques. Deep understanding of data structures, algorithms, and excellent problem-solving skills. You enjoy a fast-paced work environment.
Additional Preferred Qualifications Experience with Cloud Native Frameworks tools and products is a plus Hands-on experience or knowledge in Machine Learning Engineering, ML framework, ML Ops or Generative AI Responsibilities As a principal member of the software engineering division, you will take an active role in the design, development, deployment and operations of Oracle’s Generative AI offerings. Qualifications Career Level - IC4 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 3 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.


Featured Companies