
1584 Cloud Platforms Jobs - Page 18

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

7.0 - 11.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As the Manager - Senior ML Engineer (Full Stack) at Firstsource Solutions Limited, your primary responsibility will be to lead the development and integration of Generative AI (GenAI) technologies while managing full-stack development projects. Your role will involve writing code modules, collaborating with cross-functional teams, and mentoring a team of developers and engineers to ensure project success.

To excel in this position, you must possess strong proficiency in Python programming, along with experience in data analysis and visualization libraries such as Pandas, NumPy, Matplotlib, and Seaborn. Your proven track record in machine learning and AI development, as well as familiarity with Generative AI (GenAI) technologies, will be essential. Additionally, your expertise in full-stack development, web development frameworks like Django or Flask, machine learning frameworks such as TensorFlow, Keras, PyTorch, or Scikit-learn, and cloud platforms like AWS, Azure, or Google Cloud will be highly valued. Effective communication skills and the ability to work in a fast-paced environment are crucial for success in this role. You will be expected to stay updated on the latest industry trends and technologies, ensuring compliance with software development best practices, security protocols, and data privacy regulations.

Your qualifications should include a Bachelor's degree in computer science or a related field, along with a minimum of 7 years of experience in machine learning engineering or a similar role. If you are a driven professional with a passion for cutting-edge technologies and a desire to lead innovative projects in a dynamic environment, we encourage you to apply for this exciting opportunity at Firstsource Solutions Limited.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

West Bengal

On-site

Are you passionate about building connected devices from the ground up? Kalkey is on a mission to redefine the future of IoT, and we're looking for a dynamic IoT Engineer to help us get there. This is your chance to work on cutting-edge IoT hardware, owning the full development lifecycle from PCB design to cloud integration and scaled manufacturing.

What You'll Do:

Hardware Development & Design:
- Create custom PCB layouts using Altium, KiCad, or Eagle
- Source optimal sensors, microcontrollers, and components
- Draft detailed schematics and maintain library databases
- Optimize designs through Design for Manufacturability (DFM) practices

Firmware & Communication Protocols:
- Develop and test embedded firmware in C/C++
- Implement MQTT and secure IoT communication stacks
- Configure WiFi, Bluetooth, LoRaWAN, and cellular connectivity
- Apply encryption and cybersecurity best practices in firmware

Manufacturing & Production Support:
- Liaise with manufacturing partners from prototype to volume production
- Build testing protocols and assist in debugging production issues
- Write detailed assembly and manufacturing documentation

System Integration & Testing:
- Integrate hardware with cloud platforms (AWS, Azure, etc.)
- Conduct system-level validation and performance tuning
- Build automated test frameworks for quality assurance
- Monitor field performance and continuously optimize devices

What We're Looking For:

Required:
- B.E./B.Tech in Electrical/Electronics or a related field
- 3+ years in IoT hardware development and PCB design
- Proficiency in Altium, KiCad, or Eagle
- Embedded C/C++ for microcontrollers (e.g., ESP32, STM32)
- Experience implementing MQTT and wireless protocols
- Circuit design skills, analog and digital
- Experience in sourcing and managing component supply chains

Preferred:
- M.E./M.Tech in a relevant discipline
- Experience with AWS IoT, Google Cloud IoT, or Azure IoT
- Familiarity with protocols like CoAP, HTTPS, WebSocket
- Knowledge of compliance processes (FCC, CE, UL)
- Enclosure and mechanical design experience
- Power optimization and battery-based systems knowledge
- Background in device-level cybersecurity

What We Offer:
- Competitive salary & performance-based incentives
- Stock options in a fast-growing tech company
- Work on groundbreaking IoT innovations
- Training, mentorship, and skill-building resources
- Flexible hours and remote work options
- Inclusive and collaborative team culture

Posted 1 week ago

Apply

10.0 - 12.0 years

35 - 50 Lacs

Chennai

Work from Office

Role Summary: We are seeking a seasoned professional to lead design, development, and optimization efforts within the Palo Alto Prisma suite, including Prisma Access and Prisma Cloud. This role involves working on cloud-native architectures, data-plane applications, and scalable infrastructure to support secure access and cloud operations.

Key Responsibilities:

Architecture & Development: Design and implement scalable software features for Prisma Access or Prisma Cloud. Lead development of data-plane applications and cloud-native services. Collaborate with cross-functional teams to integrate PAN-OS features into Prisma platforms.

Performance & Optimization: Profile and tune systems software for efficient cloud operation. Optimize microservices and containerized workloads for performance and reliability.

Collaboration & Leadership: Mentor junior engineers and contribute to team growth. Participate in design reviews and technical strategy discussions. Work closely with DevOps and support teams to troubleshoot and resolve complex issues.

Testing & Automation: Build and automate performance testing scenarios. Ensure high reliability and quality through rigorous testing and validation.

Required Qualifications: 9-13 years of experience in software engineering or cloud infrastructure. Strong programming skills in C/C++, Python, or Go. Deep understanding of operating systems (Linux/Unix), networking (TCP/IP, TLS), and cloud platforms. Experience with microservices, container orchestration (Kubernetes), and CI/CD pipelines. Proven track record of delivering enterprise-grade software solutions.

Preferred Experience: Hands-on experience with Palo Alto Prisma Access or Prisma Cloud. Exposure to cloud providers (AWS, Azure, GCP). Familiarity with infrastructure-as-code tools (Terraform, Ansible). Strong debugging, profiling, and performance tuning skills.

Posted 1 week ago

Apply

2.0 - 6.0 years

4 - 8 Lacs

Kannur

Work from Office

Job Summary: We are seeking a highly motivated and skilled DevOps Engineer to join our growing team. You will play a crucial role in building, maintaining, and optimizing our infrastructure and CI/CD pipelines. This role requires a strong understanding of Linux administration, web server technologies, database management, cloud platforms, and security best practices. Experience with Windows/Power BI is a significant advantage. If you are passionate about automation, infrastructure as code, and continuous improvement, we encourage you to apply.

Responsibilities:
- Infrastructure Management: Design, implement, and maintain scalable and reliable infrastructure on cloud platforms (AWS, Azure, GCP, DigitalOcean) and on-premises environments.
- Web Server Administration: Manage and optimize web servers such as Nginx and Apache, including load balancing and reverse proxy configurations.
- Database Administration: Set up, configure, and manage databases (MySQL, MongoDB, Redis, Postgres), including replication, backups, monitoring, and performance tuning.
- CI/CD Pipeline Development: Build and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or AWS pipelines.
- Containerization: Work with Docker to containerize applications and manage containerized deployments.
- Monitoring and Logging: Implement and maintain monitoring and logging systems using tools like Grafana, Checkmk, or the ELK stack.
- Security: Implement security best practices, including DDoS protection, WAF configuration (Cloudflare, AWS WAF), and secure environment deployments.
- Cloud Networking: Understand and implement cloud networking concepts such as VPCs, subnets, security groups, NAT, and VPNs.
- Collaboration: Work closely with development, QA, and operations teams to ensure smooth and efficient deployments.
- Troubleshooting: Identify and resolve infrastructure and application issues.
- Documentation: Maintain clear and concise documentation for infrastructure, deployments, and processes.

Required Skills:
- Linux Administration: Strong proficiency in Linux administration.
- Web Server Management: Experience with Nginx and Apache web servers, including load balancing and reverse proxy setup.
- Database Management: Expertise in managing databases such as MySQL, MongoDB, Redis, and Postgres, including replication, backups, and monitoring.
- CI/CD: Experience building and maintaining CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or AWS pipelines.
- Containerization: Strong hands-on experience with Docker.
- Monitoring: Experience with monitoring tools like Grafana, Checkmk, or the ELK stack.
- Cloud Platforms: Familiarity with at least one cloud platform (AWS, Azure, GCP, DigitalOcean).
- Cloud Networking: Basic understanding of cloud networking concepts (VPCs, subnets, security groups, etc.).
- Security Best Practices: Knowledge of security protocols and measures (DDoS protection, WAF, etc.).
- Scripting: Proficiency in scripting languages (e.g., Bash, Python) is a plus.
- Version Control: Experience with Git.

Preferred Skills:
- Windows/Power BI: Experience with Windows Server administration and Power BI.
- User and Permission Management: Experience managing user accounts and permissions.

Posted 1 week ago

Apply

1.0 - 3.0 years

9 - 13 Lacs

Pune

Work from Office

Delivery Manager - Data Engineering (Databricks & Snowflake)

Position: Delivery Manager - Data Engineering
Location: Bavdhan/Baner, Pune
Experience: 7-10 years
Employment Type: Full-time

Job Summary: We are seeking a Delivery Manager - Data Engineering to oversee multiple data engineering projects leveraging Databricks and Snowflake. This role requires strong leadership skills to manage teams, ensure timely delivery, and drive best practices in cloud-based data platforms. The ideal candidate will have deep expertise in data architecture, ETL processes, cloud data platforms, and stakeholder management.

Key Responsibilities:

Project & Delivery Management: Oversee the end-to-end delivery of multiple data engineering projects using Databricks and Snowflake. Define project scope, timelines, milestones, and resource allocation to ensure smooth execution. Identify and mitigate risks, ensuring that projects are delivered on time and within budget. Establish agile methodologies (Scrum, Kanban) to drive efficient project execution.

Data Engineering & Architecture Oversight: Provide technical direction on data pipeline architecture, data lakes, data warehousing, and ETL frameworks. Ensure optimal performance, scalability, and security of data platforms. Collaborate with data architects and engineers to design and implement best practices for data processing and analytics.

Stakeholder & Client Management: Act as the primary point of contact for clients, senior management, and cross-functional teams. Understand business requirements and translate them into technical solutions. Provide regular status updates and manage client expectations effectively.

Team Leadership & People Management: Lead, mentor, and develop data engineers, architects, and analysts working across projects. Drive a culture of collaboration, accountability, and continuous learning. Ensure proper resource planning and capacity management to balance workload effectively.

Technology & Process Improvement: Stay up to date with emerging trends in Databricks, Snowflake, and cloud data technologies. Continuously improve delivery frameworks, automation, and DevOps for data engineering. Implement cost-optimization strategies for cloud-based data solutions.

Required Skills & Experience:

Technical Expertise: 10+ years of experience in data engineering and delivery management. Strong expertise in Databricks, Snowflake, and cloud platforms (AWS, Azure, GCP). Hands-on experience in ETL, data modeling, and big data processing frameworks (Spark, Delta Lake, Apache Airflow, dbt). Understanding of data governance, security, and compliance standards (GDPR, CCPA, HIPAA, etc.). Familiarity with SQL, Python, Scala, or Java for data transformation.

Project & Team Management: Proven experience in managing multiple projects simultaneously. Strong knowledge of Agile, Scrum, and DevOps practices. Experience in budgeting, forecasting, and resource management.

Soft Skills & Leadership: Excellent communication and stakeholder management skills. Strong problem-solving and decision-making abilities. Ability to motivate and lead cross-functional teams effectively.

Preferred Qualifications: Experience with data streaming (Kafka, Kinesis, or Pub/Sub). Knowledge of ML- and AI-driven data processing solutions. Certifications in Databricks, Snowflake, or cloud platforms (AWS/Azure/GCP).

Apply or share your updated CV at hr@anvicybernetics.

Posted 1 week ago

Apply

5.0 - 8.0 years

4 - 8 Lacs

Kharar

Work from Office

We are looking for a skilled Sr. Java Developer with 5 to 8 years of experience to join our team at Wits Innovation Lab, contributing to the development of innovative software solutions.

Roles and Responsibilities: Design, develop, and test high-quality Java-based applications using Spring Boot and other relevant frameworks. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain large-scale Java applications with complex data structures and algorithms. Troubleshoot and resolve technical issues efficiently. Participate in code reviews and contribute to improving overall code quality. Stay updated with industry trends and emerging technologies to enhance skills and knowledge.

Job Requirements: Strong proficiency in the Java programming language with expertise in Spring Boot. Experience with front-end technologies such as HTML, CSS, and JavaScript is desirable. Knowledge of database management systems like MySQL or PostgreSQL is preferred. Familiarity with cloud platforms like AWS or Azure is an added advantage. Excellent problem-solving skills and attention to detail. Ability to work collaboratively in a team environment and communicate effectively with stakeholders.

Posted 1 week ago

Apply

9.0 - 12.0 years

5 - 5 Lacs

Bengaluru

Work from Office

Key Responsibilities:
- Define and lead the automation strategy for enterprise infrastructure.
- Design and build automated solutions for provisioning, monitoring, and operational tasks.
- Collaborate with DevOps, Cloud, Security, and App teams to find and implement automation opportunities.
- Use Infrastructure as Code (IaC) tools like Terraform, Ansible, CloudFormation, or ARM templates.
- Automate across multi-cloud (AWS, Azure, GCP) and hybrid environments.
- Build and maintain CI/CD pipelines (e.g., Jenkins, GitLab CI, Azure DevOps).
- Create workflows using tools like StackStorm, n8n, VMware, ServiceNow, and monitoring tools (Zabbix, LogicMonitor, SolarWinds, Icinga).
- Set standards for version control, change management, and compliance.
- Mentor and guide the automation engineering team.
- Review code for scalability, security, and performance.
- Stay updated with new tools and technologies in automation.

Required Skills & Qualifications:
- 12+ years in IT infrastructure, with 7+ years in automation.
- Strong skills in Terraform, Ansible, PowerShell, Python, Bash, and YAML (including YAQL and Jinja templating).
- Solid knowledge of AWS, Azure, or GCP cloud platforms.
- Experience with CI/CD tools (Jenkins, GitLab, Azure DevOps).
- Hands-on with orchestration tools like StackStorm, n8n, or ServiceNow Orchestration.
- Knowledge of Docker and Kubernetes is a plus.
- Strong leadership, problem-solving, and communication skills.

Key Skills: Python scripting, RESTful APIs, Infrastructure as Code (IaC)
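As a rough illustration of the provisioning and operational automation this role describes, here is a minimal Python/boto3 sketch that flags running EC2 instances missing an owner tag; the tag name, region, and the optional stop action are assumptions for the example, not requirements taken from the posting.

```python
"""Illustrative only: tag-compliance automation in the spirit of the role above."""
import boto3

REQUIRED_TAG = "owner"  # hypothetical tagging policy


def find_untagged_instances(region: str = "us-east-1") -> list:
    """Return IDs of running instances that are missing the required tag."""
    ec2 = boto3.client("ec2", region_name=region)
    untagged = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    untagged.append(instance["InstanceId"])
    return untagged


def stop_instances(instance_ids: list, region: str = "us-east-1") -> None:
    """Stop the given instances; callers decide whether enforcement is wanted."""
    if instance_ids:
        boto3.client("ec2", region_name=region).stop_instances(InstanceIds=instance_ids)


if __name__ == "__main__":
    ids = find_untagged_instances()
    print(f"Untagged running instances: {ids}")
    # stop_instances(ids)  # uncomment only once the enforcement policy is agreed
```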

Posted 1 week ago

Apply

7.0 - 8.0 years

32 - 45 Lacs

Pune

Work from Office

We are looking to add an experienced and enthusiastic Lead Data Scientist to our Jet2 Data Science team in India. Reporting to the Data Science Delivery Manager, the Lead Data Scientist is a key appointment to the Data Science Team, with responsibility for executing the data science strategy and realising the benefits we can bring to the business by combining insights gained from multiple large data sources with the contextual understanding and experience of our colleagues across the business.

In this exciting role, you will be joining an established team of 40+ Data Science professionals, based across our UK and India bases, who are using data science to understand, automate and optimise key manual business processes, inform our marketing strategy, assess product development and revenue opportunities, and optimise operational costs. As Lead Data Scientist, you will have strong experience in leading data science projects and creating machine learning models, and be able to confidently communicate with and enthuse key business stakeholders.

Roles and Responsibilities - a typical day in your role at Jet2TT:
- Lead a team of data scientists, with responsibility for delivering and managing day-to-day activities.
- The successful candidate will be highly numerate with a statistical background, experienced in using R, Python or a similar statistical analysis package.
- You will be expected to work with internal teams across the business, and to identify and collaborate with stakeholders across the wider group.
- Leading and coaching a group of Data Scientists, you will plan and execute the use of machine learning and statistical modelling tools suited to the identified initiative delivery or discovery problem.
- You will have a strong ability to analyse the created algorithms and models to understand how changes in metrics in one area of the business could impact other areas, and be able to communicate those analyses to key business stakeholders.
- You will identify efficiencies in the use of data across its lifecycle, reducing data redundancy, structuring data to ensure efficient use of time, and ensuring retained data/information provides value to the organisation and remains in line with legitimate business and/or regulatory requirements.
- Your ability to rise above group think and see beyond the here and now is matched only by your intellectual curiosity.
- Strong SQL skills and the ability to create clear data visualisations in tools such as Tableau or Power BI will be essential. You will also have experience in developing and deploying predictive models using machine learning frameworks and have worked with big data technologies.
- As we aim to realise the benefits of cloud technologies, some familiarity with cloud platforms like AWS for data science and storage would be desirable.
- You will be skilled in gathering data from multiple sources and in multiple formats, with knowledge of data warehouse design, logical and physical database design, and the challenges posed by data quality.

Qualifications, Skills and Experience (Candidate Requirements):
- Experience in leading a small to mid-size data science team
- Minimum 7 years of experience in the industry and 4+ years of experience in data science
- Experience in building and deploying machine learning algorithms, and detailed knowledge of applied statistics
- Good understanding of various data architectures: RDBMS, data warehouse and big data
- Experience of working with regions such as the US, UK, Europe or Australia is a plus
- Ability to liaise with Data Engineers, Technology Leaders and Business Stakeholders
- Working knowledge of an Agile framework is good to have
- Demonstrates willingness to learn
- Mentoring and coaching team members
- Strong delivery performance, working on complex solutions in a fast-paced environment

Posted 1 week ago

Apply

4.0 - 8.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Project description: We're seeking a strong and creative Software Engineer eager to solve challenging problems of scale and work on cutting-edge technologies. In this project, you will have the opportunity to write code that will impact thousands of users. You'll apply your critical thinking and technical skills to develop cutting-edge software, and you'll have the opportunity to interact with teams across disciplines. At Luxoft, our culture is one that thrives on solving difficult problems, focusing on product engineering based on hypothesis testing to empower people to come up with ideas. In this new adventure, you will have the opportunity to collaborate with a world-class team in the field of insurance by building a holistic solution, interacting with multidisciplinary teams.

Responsibilities: As a Lead OpenTelemetry Developer, you will be responsible for developing and maintaining OpenTelemetry-based solutions. You will work on instrumentation, data collection, and observability tools to ensure seamless integration and monitoring of applications. This role involves writing documentation and promoting best practices around OpenTelemetry.

Skills

Must have:
- Proven experience in supporting and managing Dynatrace solutions (SaaS/Managed).
- Strong background in application performance monitoring and troubleshooting.
- Experience with cloud platforms (AWS, Azure, GCP) and container technologies (Docker, Kubernetes) is a plus.
- ServiceNow integration experience.
- Experience setting up Dynatrace extensions.
- Ability to handle complex hybrid-cloud environments, implementing both SaaS and on-premise deployments.
- Configure application monitoring, anomaly detection profile creation, alert profile creation, synthetic monitoring, log monitoring, etc.
- Support the identification of root causes (root cause analysis).
- Understanding of DQL queries.
- Knowledge of industrialization tools such as Ansible.

Nice to have: -

Posted 1 week ago

Apply

15.0 - 20.0 years

15 - 20 Lacs

Bengaluru

Work from Office

Talent Acquisition Advisor: Ramboini Swathi
Job Code Level: DSP5

Your Impact: As a Principal Software Engineer you will utilize superior knowledge and experience to perform highly complex product architecture, design, systems analysis, research, maintenance, troubleshooting and other programming activities. You will also play a key role in the development of work teams by providing others with direction and leadership, and you will be involved in cross-team planning activities such as providing status updates and coordinating activities.

What The Role Offers:
- Produce high-quality code according to design specifications.
- Detailed technical design of highly complex software components.
- Utilize superior analytical skills to troubleshoot and fix highly complex code defects.
- Propose creative solutions or alternatives balancing risk, complexity, and effort to meet requirements.
- Lead software design/code reviews to ensure quality and adherence to company standards.
- Lead and mentor other team members.
- Collaborate with the Product Owner to plan and prioritize tasks for others to support the achievement of team objectives.
- Work across teams and functional roles to ensure interoperability among other products, including training and consultation.
- Provide status updates to stakeholders and escalate issues when necessary.
- Lead and/or participate in the software development process from design to release in an Agile development framework.

What you will need to succeed:
- Bachelor's degree in Computer Science or a related field
- 15+ years of enterprise product development experience
- Strong hands-on experience in the design, development and deployment of applications on public cloud platforms (preferably GCP, but Azure/AWS is acceptable)
- Deep expertise in cloud-native architecture, including microservices, containerisation (Docker/Kubernetes), serverless computing and infrastructure as code (Terraform)
- Experience in modernizing legacy systems and migrating workloads to public cloud
- Proficiency in CI/CD pipelines and cloud-based DevOps practices (Git, Jenkins or similar)
- Experience in building REST and SOAP APIs and responsive web applications (Angular/React)
- Good programming practices with solid object-oriented development experience
- Prior experience with C++ and system-level programming is an added advantage
- A strong understanding of application architecture, high-level and low-level design, middleware/application servers and infrastructure
- Good understanding of SQL and database concepts
- Experience liaising with groups of people across several geographies
- Excellent communication and time management skills

Posted 1 week ago

Apply

3.0 - 7.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Responsibilities:
- Develop and automate complex workflows and systems using Python to streamline IT processes and infrastructure management.
- Design, implement, and manage Ansible playbooks for infrastructure automation, configuration management, and continuous deployment.
- Build and maintain scalable automation solutions within private cloud environments (OpenStack), leveraging cloud-native tools and services.
- Collaborate with cross-functional teams to integrate AIOps for proactive system monitoring, anomaly detection, and automated incident response.
- Implement MLOps pipelines to automate machine learning model deployment, monitoring, and lifecycle management in production environments.
- Optimize infrastructure and processes using automation frameworks to reduce operational overhead and improve system performance.
- Automate routine tasks and processes related to system provisioning, configuration, and patch management.
- Design self-healing, auto-scaling systems by incorporating advanced automation techniques in cloud platforms.
- Create automated workflows to manage data pipelines, train models, and monitor ML models' performance in MLOps environments.
- Collaborate with DevOps teams to build and maintain CI/CD pipelines and automated deployment processes for applications and machine learning models.
- Continuously assess and improve automation frameworks and pipelines to align with the latest industry best practices.

Profile required:
- Python: Strong proficiency in scripting and automation tasks.
- PowerShell scripting knowledge.
- Ansible: Expertise in writing and managing Ansible playbooks for infrastructure automation.
- Cloud knowledge: Sound understanding of cloud platforms like AWS, Azure, or GCP, including serverless architectures and cloud-native automation tools.
- AIOps: Experience with tools and frameworks that use AI to enhance IT operations (such as monitoring, event correlation, and incident management).
- MLOps: Familiarity with automating the deployment, monitoring, and management of machine learning models.
- DevOps CI/CD: Experience with CI/CD pipelines and infrastructure as code.
- Good technical grasp of databases and systems.
- Problem solving: Strong analytical skills to identify and resolve automation challenges effectively.
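Because the role combines Python automation with Ansible playbooks, here is a hedged sketch of how such a workflow is often glued together: a small Python wrapper that runs a playbook and surfaces failures so it can be scheduled or chained with other jobs. The playbook name, inventory path, and extra variables are placeholders, not details from the posting.

```python
"""Illustrative sketch: driving an Ansible playbook from Python (stdlib only)."""
import subprocess
import sys


def run_playbook(playbook: str, inventory: str, extra_vars: dict = None) -> int:
    """Run ansible-playbook with optional extra vars and return its exit code."""
    cmd = ["ansible-playbook", "-i", inventory, playbook]
    for key, value in (extra_vars or {}).items():
        cmd += ["-e", f"{key}={value}"]

    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        # In a real pipeline this is where an alerting or incident hook would go.
        print(result.stdout)
        print(result.stderr, file=sys.stderr)
    return result.returncode


if __name__ == "__main__":
    # Hypothetical playbook and inventory paths used purely for illustration.
    rc = run_playbook("patch_servers.yml", "inventory/production.ini", {"serial": 5})
    sys.exit(rc)
```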

Posted 1 week ago

Apply

6.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office

Your role: We are hiring a Cloud FinOps professional with 9-12 years of experience for our Bangalore location. The ideal candidate will drive cloud cost optimization, financial governance, and cross-functional collaboration. Strong expertise in cloud platforms, budgeting, and cost analysis is essential. Join us to shape efficient cloud financial strategies.

- Define, create, and update cloud cost optimization strategies and plans.
- Measure, improve, and communicate financial performance and savings outcomes.
- Ensure process adherence and prioritize tasks aligned with FinOps goals.
- Approve savings plans and facilitate collaboration across engineering and finance teams.
- Provide FinOps best practice guidance and support integration with tools like Cloudability.
- Implement automation strategies such as autoscaling, rightsizing, and cost alerts.
- Enable budget management features based on user roles and access levels.
- Maintain detailed documentation and reporting on cloud spend and optimization efforts.

Your Profile:
- Hands-on experience with AWS, Azure, or GCP cloud platforms.
- Expertise in cost optimization, financial analysis, and FinOps principles.
- Proven ability to perform rightsizing, budget reviews, and generate FinOps reports.
- Familiarity with FinOps frameworks and cloud financial governance.
- Strong communication, collaboration, and problem-solving skills.

What you'll love about working here: You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group, and you will also get personalized career guidance from our leaders. You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage and new-parent support via flexible work. At Capgemini, you can work on cutting-edge projects in tech and engineering with industry leaders or create solutions to overcome societal and environmental challenges.
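As one concrete example of the FinOps reporting mentioned above, the sketch below pulls roughly a month of spend per AWS service with the Cost Explorer API. It is a minimal illustration only; the 30-day window and the choice of the UnblendedCost metric are assumptions, not part of the posting.

```python
"""Illustrative sketch: per-service spend report via the AWS Cost Explorer API."""
from datetime import date, timedelta

import boto3


def monthly_cost_by_service() -> dict:
    """Return unblended cost per AWS service for the previous ~30 days."""
    ce = boto3.client("ce")
    end = date.today()
    start = end - timedelta(days=30)
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    costs = {}
    for period in resp["ResultsByTime"]:
        for group in period["Groups"]:
            service = group["Keys"][0]
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            costs[service] = costs.get(service, 0.0) + amount
    return costs


if __name__ == "__main__":
    # Print services ordered by spend, highest first.
    for service, amount in sorted(monthly_cost_by_service().items(),
                                  key=lambda kv: kv[1], reverse=True):
        print(f"{service:<40s} ${amount:,.2f}")
```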

Posted 1 week ago

Apply

7.0 - 11.0 years

9 - 13 Lacs

Bengaluru

Work from Office

Your role: We are seeking an experienced and highly motivated Cloud Security Engineer to lead the implementation and optimization of security solutions across our public and hybrid cloud infrastructure. This role requires hands-on expertise in Microsoft Defender for Cloud, Cloud Access Security Broker (CASB), Cloud Workload Protection Platform (CWPP), and Cloud Security Posture Management (CSPM) tools. The ideal candidate will be responsible for ensuring robust visibility, security, and compliance across all cloud-native assets, workloads, and applications.

- Design, deploy, and manage cloud-native security architectures across Azure, AWS, and GCP environments.
- Implement and optimize Microsoft Defender for Cloud, CASB solutions, and CWPP/CSPM tools to secure cloud workloads and assets.
- Monitor cloud environments for anomalies, vulnerabilities, and potential threats.
- Ensure compliance with regulatory standards (e.g., ISO, NIST, GDPR, HIPAA) and internal security policies.
- Conduct risk assessments and threat modeling of cloud services and applications.
- Collaborate with DevOps and Cloud Engineering teams to embed security into CI/CD pipelines.
- Develop automated security alerts, incident responses, and logging mechanisms.
- Provide recommendations for cloud architecture adjustments to strengthen security posture.
- Create and maintain documentation for cloud security strategies, policies, and procedures.

Your profile:
- Hands-on experience with Microsoft Defender for Cloud and CASB solutions (e.g., Microsoft Defender for Cloud Apps).
- Proven expertise with CWPP and CSPM platforms (e.g., Prisma Cloud, Wiz, Microsoft Defender CSPM).
- Strong understanding of the Azure, AWS, and GCP cloud platforms.
- Proficiency in scripting languages (e.g., PowerShell, Python) and infrastructure as code (e.g., Terraform, ARM templates).
- Knowledge of cloud security frameworks and best practices.
- Familiarity with SIEM solutions and cloud-native logging (e.g., Azure Monitor, AWS CloudWatch).
- Relevant certifications (e.g., Microsoft Certified: Azure Security Engineer Associate, CISSP, CCSP, AWS Certified Security - Specialty) are highly preferred.
- Excellent communication skills and stakeholder management experience.

Posted 1 week ago

Apply

15.0 - 20.0 years

20 - 25 Lacs

Noida

Work from Office

A Senior Tech & Delivery Lead is a leadership role focused on driving the successful delivery of technology projects and products. They are responsible for overseeing the entire delivery lifecycle, from planning and execution to ensuring quality and timely completion. This role requires technical expertise, leadership skills, and strong project management capabilities.

Key Responsibilities:
- Strategic Planning: Translate business priorities into technical roadmaps and execution plans.
- Team Leadership: Manage and mentor technical teams, fostering collaboration and driving performance.
- Delivery Oversight: Ensure projects are delivered on time, within budget, and to the required quality standards.
- Technical Expertise: Provide guidance and support on technical solutions, architecture, and best practices.
- Stakeholder Management: Effectively communicate with stakeholders, manage expectations, and address concerns.
- Risk Management: Identify and mitigate potential risks and issues that could impact delivery.
- Agile Methodologies: Utilize and promote agile principles and practices within the team.

Skills and Experience:
- Technical Proficiency: Strong understanding of the software development lifecycle, architecture, and technologies.
- Project Management: Proven experience in planning, executing, and managing complex technology projects.
- Leadership Skills: Ability to motivate and guide teams, fostering a positive and productive environment.
- Communication Skills: Excellent written and verbal communication skills for effective stakeholder engagement.
- Problem-Solving: Ability to analyze complex technical issues and develop effective solutions.
- Agile Experience: Familiarity with agile methodologies and frameworks.

Required Skills:
- Ability to obtain a security clearance
- Bachelor's degree in Computer Science, Statistics, or a relevant field
- 15+ years of experience with software development
- 10+ years of experience building software using agile methods
- 15+ years of experience with multiple back-end languages (C#, Python) and JavaScript frameworks
- 15+ years of experience with build, deployment, and release automation and orchestration in a DevOps environment
- 15+ years of experience with infrastructure-as-code environments, including automated server or network configuration, large-scale software deployments, and monitoring and testing, such as CI/CD
- 15+ years of experience automating tests to determine the quality, security, performance, and usability of a system
- Experience with containerization technologies like Docker
- Experience writing and evaluating user stories, acceptance criteria, and pull requests
- Sharp analytical, problem-solving, and decision-making skills, attention to detail, and excellent communication skills
- Experience with .NET, ETL, Angular, Node.js, and Azure

Preferred Skills / Attributes:
- Experience with infrastructure-as-code technologies such as Ansible or Terraform
- Experience developing cloud solutions
- Experience with Looker and GCP
- Certifications in relevant technologies
- Experience with API development using C# and JavaScript frameworks
- Experience with Lean Design, Test-Driven Development (TDD), and Behavior-Driven Development (BDD)
- A passion for contributing to the full stack: the front end, the back end, and anything in between (middleware or otherwise)

Posted 1 week ago

Apply

4.0 - 7.0 years

7 - 11 Lacs

Noida

Work from Office

Design, implement, and maintain data pipelines for processing large datasets, ensuring data availability, quality, and efficiency for machine learning model training and inference. Collaborate with data scientists to streamline the deployment of machine learning models, ensuring scalability, performance, and reliability in production environments. Develop and optimize ETL (Extract, Transform, Load) processes, ensuring data flows from various sources into structured data storage systems. Automate ML workflows using MLOps tools and frameworks (e.g., Kubeflow, MLflow, TensorFlow Extended (TFX)). Ensure effective model monitoring, versioning, and logging to track performance and metrics in a production setting. Collaborate with cross-functional teams to improve data architectures and facilitate the continuous integration and deployment of ML models. Work on data storage solutions, including databases, data lakes, and cloud-based storage systems (e.g., AWS, GCP, Azure). Ensure data security, integrity, and compliance with data governance policies. Perform troubleshooting and root cause analysis on production-level machine learning systems.

Skills: AWS Glue, PySpark, AWS services, strong SQL. Nice to have: Redshift, knowledge of SAS datasets.

Mandatory Competencies: DevOps/Configuration Mgmt - Docker; ETL - AWS Glue; DevOps/Configuration Mgmt - Cloud Platforms - AWS; DevOps/Configuration Mgmt - Containerization (Docker, Kubernetes); Database - SQL Server - SQL Packages
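To make the pipeline work described above more concrete, here is a minimal batch ETL step in plain PySpark: read raw CSV, apply a simple cleaning rule, and write partitioned Parquet. The paths, column names, and cleaning rule are assumptions for illustration; an AWS Glue job would wrap similar logic in a GlueContext rather than a bare SparkSession.

```python
"""Illustrative sketch of a small batch ETL step in plain PySpark."""
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


def run_etl(input_path: str, output_path: str) -> None:
    spark = SparkSession.builder.appName("orders-etl").getOrCreate()

    # Extract: raw CSV with a header row
    raw = spark.read.option("header", True).csv(input_path)

    # Transform: drop rows without an order id, cast the amount, stamp a load date
    cleaned = (
        raw.filter(F.col("order_id").isNotNull())
           .withColumn("amount", F.col("amount").cast("double"))
           .withColumn("load_date", F.current_date())
    )

    # Load: write partitioned Parquet for downstream training/inference jobs
    cleaned.write.mode("overwrite").partitionBy("load_date").parquet(output_path)
    spark.stop()


if __name__ == "__main__":
    # Hypothetical S3 locations used purely for illustration.
    run_etl("s3://example-bucket/raw/orders/", "s3://example-bucket/curated/orders/")
```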

Posted 1 week ago

Apply

3.0 - 5.0 years

5 - 15 Lacs

Hyderabad

Work from Office

Description: We are looking for a visionary and hands-on DevOps Engineer to drive the strategic direction, implementation, and continuous improvement of our DevOps practices across the organization.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical discipline.
- 4 to 6 years of overall experience in infrastructure engineering, DevOps, systems administration, or platform engineering.
- Hands-on expertise in cloud platforms (AWS, Azure, or GCP), with deep knowledge of networking, IAM, VPCs, storage, and compute services.
- Strong proficiency in Infrastructure as Code (IaC) using Terraform, Ansible, or equivalent.
- Experience building and managing CI/CD pipelines using tools such as Jenkins, GitLab CI, CircleCI, or ArgoCD.
- Strong background in Linux/Unix systems, system administration, scripting (e.g., Bash, Python, Go), and configuration management.
- Experience implementing containerization and orchestration using Docker, Kubernetes, and Helm.
- Familiarity with observability tools and logging frameworks (e.g., ELK, Datadog, Fluentd, Prometheus, Grafana).
- Solid understanding of DevOps principles, Agile/Lean methodologies, and modern SDLC practices.

Job Responsibilities: The ideal candidate is a technical leader with deep expertise in automation, cloud operations, configuration management, and infrastructure as code (IaC). This role requires strong collaboration across engineering, security, product, and QA to enable a culture of continuous delivery, operational excellence, and system reliability.

What We Offer:
- Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
- Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers or client facilities.
- Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
- Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft-skill trainings.
- Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.
- Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidized rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks and the GL Club, where you can have coffee or tea with your colleagues, and we offer discounts at popular stores and restaurants.

Posted 1 week ago

Apply

16.0 - 21.0 years

27 - 30 Lacs

Pune

Work from Office

Standing at the critical junction between network and applications, our client is a leader in secure application services. Our solutions protect and optimize application performance in a world of many clouds. We strive to enable intelligent automation with deep machine learning to ensure business-critical applications are protected, reliable and always available. Are you up for the challenge? Come join INCEDO and make your impact on the future!

As a Director - Support, we look to you as a leader for support operations in the networking and network security product domain: someone who has established a support organization for network functions and SaaS operations for scalable and distributed application architectures.

Technical Skills:

Networking Proficiency:
- Deep understanding of core networking protocols (TCP/IP, UDP, DNS, BGP, OSPF, etc.).
- Extensive experience in network design, implementation, and troubleshooting.
- Hands-on experience with network security products such as load balancers, firewalls, DDoS mitigation solutions, and related technologies.
- Strong understanding of network traffic analysis and packet capture tools.

Cloud Platform and Virtualization Expertise:
- Solid understanding of cloud platforms (OCI, AWS, Azure) and their networking components (VPCs, subnets, security groups, etc.).
- Good understanding of virtualized environments (ESXi, Xen, Hyper-V).
- Strong understanding of Kubernetes and Docker for containerized form-factor network implementations.

Troubleshooting and Diagnostic Skills:
- Expert-level troubleshooting skills with network protocols and security issues.
- Ability to analyze complex network problems and develop effective solutions.
- Proficiency in using network monitoring and diagnostic tools.

Qualifications:
- Minimum of 16+ years of experience in leading and managing technical support organizations, preferably in the network security or related product industry.
- Proven track record of success in building and scaling global support operations.
- Demonstrated ability to develop and implement support strategies that drive customer satisfaction and business results.

Posted 1 week ago

Apply

5.0 - 8.0 years

12 - 22 Lacs

Mumbai, Navi Mumbai, Mumbai (All Areas)

Hybrid

Must-Have Skills:
- 5 to 8 years of backend development experience
- Strong proficiency in Core Java and Scala
- Hands-on experience with the Spring Framework
- Cloud technologies and microservices architecture
- Multithreaded programming and concurrency
- Scripting (e.g., Shell, Perl) and UNIX/Linux
- Working knowledge of relational databases (Sybase, DB2)
- Excellent communication and problem-solving skills
- Databricks (primary)

Good to Have:
- Experience with Apache Spark, Hadoop, or other big data tools
- Familiarity with FIX Protocol, MQ, XML/DTD
- Exposure to CI/CD tools and Test-Driven Development
- Domain understanding of the Equity or Fixed Income trade lifecycle
- Strong debugging and performance profiling capabilities

Posted 1 week ago

Apply

8.0 - 13.0 years

20 - 35 Lacs

Gurugram

Hybrid

Key Responsibilities:
- Design, develop, and maintain RESTful APIs using Python (preferably with FastAPI, Flask, or Django REST Framework)
- Implement server-side logic, caching mechanisms, and background task processing
- Work with SQL and NoSQL databases like PostgreSQL, MySQL, MongoDB, etc.
- Integrate third-party APIs and services securely and efficiently
- Write clean, scalable, and testable code with proper unit testing and API documentation
- Collaborate with front-end developers, DevOps, and QA teams to ensure seamless system integration
- Optimize API performance, monitor errors, and ensure uptime/reliability
- Follow agile practices and participate in code reviews, sprint planning, and standups

Requirements:
- 8 to 15 years of hands-on backend development experience in Python
- Strong experience in API development and integration
- Solid understanding of object-oriented programming and design patterns
- Experience with FastAPI, Flask, or Django (REST Framework preferred)
- Proficiency with database design, ORMs (SQLAlchemy/Django ORM), and complex queries
- Knowledge of API security, OAuth2, JWT, and session management
- Familiarity with Docker, CI/CD, and cloud platforms (AWS/GCP/Azure) is a plus
- Experience with message brokers like RabbitMQ, Kafka, or Celery is a plus
- Strong debugging, performance tuning, and unit testing skills

Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- Contributions to open-source projects or GitHub repositories are a bonus
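For orientation, the sketch below shows the shape of a minimal FastAPI service of the kind this role describes: one Pydantic model and two endpoints over an in-memory store. The resource name and fields are assumptions, and authentication, caching, and a real database are deliberately omitted.

```python
"""Illustrative sketch: a minimal FastAPI resource API (no auth, no persistence)."""
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-api")


class Order(BaseModel):
    id: int
    customer: str
    amount: float


# In-memory store standing in for PostgreSQL/MongoDB in this sketch
ORDERS: dict = {}


@app.post("/orders", status_code=201)
def create_order(order: Order) -> Order:
    """Create an order, rejecting duplicate ids."""
    if order.id in ORDERS:
        raise HTTPException(status_code=409, detail="order already exists")
    ORDERS[order.id] = order
    return order


@app.get("/orders/{order_id}")
def get_order(order_id: int) -> Order:
    """Fetch a single order by id."""
    order = ORDERS.get(order_id)
    if order is None:
        raise HTTPException(status_code=404, detail="order not found")
    return order

# Run locally with: uvicorn main:app --reload
```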

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

Qualcomm India Private Limited is seeking a highly skilled Software Engineer to join their Engineering Group. In this role, you will be responsible for designing and developing scalable internal tools using Python and advanced web frameworks such as Flask, REST APIs, and Django. Your key responsibilities will include deploying and maintaining these tools across enterprise environments to ensure reliability and performance. Additionally, you will integrate tools into CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps) to enhance engineering workflows, build interactive dashboards, and automate business processes using Power BI and Power Automate.

Collaboration is a key aspect of this role, as you will work cross-functionally to gather requirements and deliver tailored solutions. You will also be expected to maintain comprehensive documentation, provide user training and support, and work independently with minimal supervision, demonstrating a solution-oriented mindset to tackle challenges proactively.

To be considered for this position, you must have a Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field with at least 3 years of Software Engineering or related work experience. Alternatively, a Master's degree with 2+ years of experience or a PhD with 1+ year of experience will also be considered. You should have a minimum of 2 years of academic or work experience with programming languages such as C, C++, Java, Python, etc.

The ideal candidate will have 6 to 9 years of experience and possess strong proficiency in Python, automation, and backend development. Experience in AI-based tool development and in leveraging AI for development activities is a must. Hands-on experience with web development frameworks like Flask, FastAPI, or Django, familiarity with CI/CD tools, and proficiency in Power BI and Power Automate are also required. Working knowledge of cloud platforms and containerization (Docker, Kubernetes) is essential, as is a self-driven attitude with the ability to work independently and a strong problem-solving mindset. It would be beneficial to have a basic understanding of C/C++ with a willingness to learn advanced concepts to support engineering development as needed. Knowledge of or experience in embedded software development is considered a plus.

Qualcomm is an equal opportunity employer committed to providing accessible processes for individuals with disabilities. If you require accommodation during the application/hiring process, Qualcomm will provide reasonable accommodations upon request. Applicants are expected to abide by all applicable policies and procedures, including security and confidentiality requirements. Please note that our Careers Site is intended for individuals seeking a job at Qualcomm; staffing and recruiting agencies are not authorized to use this site, and unsolicited resumes or applications will not be accepted. For more information about this role, please contact Qualcomm Careers.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Principal Engineer - Artificial Intelligence at Commvault, you will have the exciting opportunity to be part of an exceptional engineering team that thrives on innovative ideas. We are looking for individuals who are passionate about leveraging AI to enhance customer interactions, streamline processes, and drive overall business growth.

Your responsibilities will include designing, developing, and deploying advanced AI models, such as machine learning algorithms, natural language processing, and computer vision, to tackle specific customer experience challenges. You will collect, clean, and analyze large datasets to identify patterns, trends, and insights that will inform the development and optimization of AI models. Integrating AI solutions seamlessly into existing customer experience systems and platforms to ensure a cohesive and personalized customer journey will be a key aspect of your role. Continuous refinement and improvement of AI models to enhance their accuracy, efficiency, and effectiveness in delivering exceptional customer experiences will be a priority. You will also develop and test AI-powered prototypes and Proofs of Concept (POCs) to demonstrate the feasibility and value of proposed solutions. Collaboration with cross-functional teams, including product managers, designers, and customer success representatives, to align AI initiatives with business objectives and customer needs is essential.

To excel in this role, you should have a Bachelor's degree along with strong programming skills in Python or other relevant programming languages. Experience with popular AI frameworks and libraries, such as TensorFlow, PyTorch, and Scikit-learn, is required. In-depth knowledge of machine learning algorithms, natural language processing techniques, cloud platforms, AI services, CRM systems, and ethical AI practices is necessary. Experience with AI-powered chatbots or virtual assistants, as well as security and SaaS experience, is strongly preferred.

At Commvault, we offer a supportive work environment with benefits such as an Employee Stock Purchase Plan (ESPP), continuous professional development, annual health check-ups, and more. We value inclusivity, professional growth, and work-life balance. If you are ready to #makeyourmark at Commvault and contribute to cutting-edge AI solutions, we encourage you to apply now and be a part of our dynamic team.

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Haryana

On-site

As an SDE-3 Node - IDSS at our company, you will be responsible for leading the development of scalable microservices for our platform. Your role will involve solving complex engineering challenges, optimizing system architecture, and collaborating with cross-functional teams to build innovative solutions that drive business growth.

Your key responsibilities will include designing and building scalable microservices using Node.js for the IDSS platform. You will also lead code reviews and shape team processes to ensure high-quality, efficient development. Additionally, you will own the product lifecycle from requirements to production, making a direct impact on business outcomes. As a mentor, you will guide and support junior developers, driving innovation and technical excellence within the team.

To be successful in this role, you should have at least 4 years of Node.js experience, with strong skills in microservices, RESTful APIs, and cloud platforms such as AWS or GCP. You should also possess expertise in Express, SQL/NoSQL/Redis databases, and message brokers like Kafka and RabbitMQ. A strong understanding of data structures, algorithms, and system design is essential, as are proven leadership skills in mentoring teams and driving successful project delivery.

If you are passionate about leading the development of scalable microservices and enjoy tackling complex engineering challenges, we encourage you to apply for this position. Join our team at Pluang and be a part of driving business growth through innovative solutions.

Posted 2 weeks ago

Apply

2.0 - 8.0 years

0 Lacs

Haryana

On-site

As you grow with us at Intellinet Systems, you will be part of a collaborative environment that fosters innovation, creativity, and community engagement. The role requires a graduate degree and a minimum of 8 years of experience; we have 5 vacancies at our Gurgaon, Haryana office with a Work From Office (WFO) policy.

Your responsibilities will include designing and developing AI models and algorithms to address complex business challenges. You will apply machine learning and deep learning techniques to analyze large datasets, collaborating with cross-functional teams to align AI solutions with business requirements. Data preprocessing and cleaning are crucial to ensure model accuracy, followed by training and optimizing AI models using various algorithms and frameworks. Monitoring model performance, staying updated on AI advancements, providing technical guidance to junior team members, and documenting project progress are key aspects of this role. Additionally, identifying new AI opportunities and proposing innovative solutions will be part of your regular tasks.

To excel in this role, you should hold a Bachelor's or Master's degree in computer science, engineering, or a related field, along with 2-3 years of industry experience in developing AI solutions. Proficiency in Python programming and relevant libraries such as TensorFlow, PyTorch, or scikit-learn is essential. A deep understanding of machine learning algorithms, statistical analysis, and data preprocessing techniques, as well as experience working with large datasets, is required. Knowledge of data visualization, cloud platforms for AI deployment (e.g., AWS, Azure, Google Cloud), software development practices, and version control systems is advantageous. Problem-solving skills, creativity, effective communication, and teamwork abilities are highly valued in this role.

If you meet these requirements and are excited to contribute to cutting-edge AI solutions, please send your updated CV to hr@intellinetsystem.com with the job code in the subject line.

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

As a Python API Developer specializing in Product Development, you will leverage your 4+ years of experience to design, develop, and maintain high-performance, scalable APIs that drive our Generative AI products. Your role will involve close collaboration with data scientists, machine learning engineers, and product teams to seamlessly integrate Generative AI models (e.g., GPT, GANs, DALL-E) into production-ready applications. Your expertise in backend development, Python programming, and API design will be crucial in ensuring the successful deployment and execution of AI-driven features.

You should hold a Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field. Your professional experience should demonstrate hands-on involvement in designing and developing APIs, particularly with Generative AI models or machine learning models in a production environment. Proficiency in cloud-based infrastructures (AWS, Google Cloud, Azure) and services for deploying backend systems and AI models is essential. Additionally, you should have a strong background in working with backend frameworks and languages like Python, Django, Flask, or FastAPI.

Your core technical skills should include expertise in Python for backend development using frameworks such as Flask, Django, or FastAPI. You should possess a strong understanding of building and consuming RESTful APIs or GraphQL APIs, along with experience in designing and implementing API architectures. Familiarity with database management systems (SQL/NoSQL) like PostgreSQL, MySQL, MongoDB, and Redis, and knowledge of cloud infrastructure (e.g., AWS, Google Cloud, Azure) are required. Experience with CI/CD pipelines, version control tools like Git, and Agile development methodologies is crucial for automating deployments and ensuring efficient backend operations.

Key responsibilities will involve collaborating closely with AI/ML engineers to integrate Generative AI models into backend services, handling data pipelines for real-time or batch processing, and engaging in design discussions to ensure the technical feasibility and scalability of features. Implementing caching mechanisms, rate limiting, and queueing systems to manage AI-related API requests, as well as ensuring backend services can handle high concurrency during resource-intensive generative AI processes, will be essential. Your problem-solving skills, excellent communication abilities for interacting with cross-functional teams, and adaptability in staying updated on the latest technologies and trends in generative AI will be critical for success in this role.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You should possess over 3 years of professional software development experience, specifically in a Java architect role. Your expertise should include Java, J2EE, the Spring Framework (such as Spring Boot and Spring Cloud), and RESTful APIs. Experience in microservices architecture and containerization utilizing Docker and Kubernetes is essential. A solid understanding of design patterns, SOA, and architectural best practices is required. Additionally, familiarity with front-end technologies like Angular, React, or Vue.js would be advantageous. Proficiency in CI/CD pipelines, DevOps practices, and cloud platforms such as AWS, Azure, or GCP is preferred. Strong problem-solving, communication, and leadership skills are expected in this role.

Your responsibilities will include defining and documenting high-level software architecture and design for enterprise Java applications, providing technical leadership and mentorship to development teams, evaluating and selecting appropriate technologies and frameworks, and designing and implementing scalable, high-performance solutions using Java, Spring Boot, and related technologies.

Key Skills: Java Technical Architect, Java, Spring Boot, Microservices
Role: Java Architect
Industry Type: IT/Computers - Software
Functional Area: IT-Software
Required Education: Bachelor's Degree
Employment Type: Full Time, Permanent

Additional information:
Job Code: GO/JC/157/2025
Recruiter Name: Kathiravan G

Posted 2 weeks ago

Apply