
639 Netskope Jobs - Page 16

JobPe aggregates these listings for easy access, but you apply directly on the original job portal.

6.0 years

0 Lacs

Kochi, Kerala, India

Remote

Experience: 6.00+ years | Salary: Confidential (based on experience) | Shift: (GMT+05:30) Asia/Kolkata (IST) | Opportunity Type: Remote | Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients, Netskope.)

Must-have skills: Airflow, LLMs, MLOps, Generative AI, Python

About The Role
Please note: this team is hiring across all levels, and candidates are individually assessed and leveled according to their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization, and more. We work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems, and our customers depend on us for accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced in building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing, and storage solutions. This is a hands-on, impactful role that will help lead the development, validation, publishing, and maintenance of the logical and physical data models supporting these environments.

What's In It For You
- You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics.
- Your contributions will have a major impact on our global customer base and across the industry through our market-leading products.
- You will solve complex, interesting challenges and improve the depth and breadth of your technical and business skills.

What You Will Be Doing
- Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security.
- Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments.
- Apply MLOps best practices to deploy and monitor machine learning models in production.
- Collaborate with cloud architects and security analysts to develop cloud-native security solutions on platforms such as AWS, Azure, or GCP.
- Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications (a minimal illustrative sketch follows this listing).
- Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats.
- Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards.
- Drive innovation by integrating the latest AI/ML techniques into security products and services.
- Mentor junior engineers and provide technical leadership across projects.

Required Skills And Experience
AI/ML Expertise
- Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection.
- Experience with AI frameworks such as TensorFlow, PyTorch, and scikit-learn.
- Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow).
- Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases.
Data Engineering
- Expertise in designing and optimizing ETL/ELT pipelines for large-scale data processing.
- Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink).
- Proficiency with relational and non-relational databases, including ClickHouse and BigQuery.
- Familiarity with vector databases such as Pinecone and pgvector, and their application in RAG systems.
- Experience with cloud-native data tools such as AWS Glue, BigQuery, or Snowflake.
Cloud and Security Knowledge
- Strong understanding of cloud platforms (AWS, Azure, GCP) and their services.
- Experience with network security concepts, extended detection and response, and threat modeling.
Software Engineering
- Proficiency in Python, Java, or Scala for data and ML solution development.
- Expertise in scalable system design and performance optimization for high-throughput applications.
Leadership and Collaboration
- Proven ability to lead cross-functional teams and mentor engineers.
- Strong communication skills to present complex technical concepts to stakeholders.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of being shortlisted and meeting the client for the interview.

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role is to help our talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: there are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
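
For context on the RAG responsibility above, here is a minimal, hypothetical sketch of the retrieval step of a RAG pipeline. It is illustrative only and not Netskope's implementation: the embed() function, the in-memory ToyVectorStore, and the sample documents are stand-ins for a real embedding model and a vector database such as Pinecone or pgvector.

# Minimal, illustrative retrieval step for a RAG pipeline (toy example, not the employer's code).
# In production, embed() would call an embedding model and the store would be a vector database;
# everything here is in memory so the sketch stays self-contained and runnable.
import math
from typing import Callable, List, Tuple

def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

class ToyVectorStore:
    def __init__(self, embed: Callable[[str], List[float]]):
        self.embed = embed  # stand-in for a real embedding model
        self.docs: List[Tuple[str, List[float]]] = []

    def add(self, text: str) -> None:
        self.docs.append((text, self.embed(text)))

    def top_k(self, query: str, k: int = 3) -> List[str]:
        # Rank stored documents by cosine similarity to the query embedding.
        q = self.embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine_similarity(q, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

def build_prompt(query: str, context: List[str]) -> str:
    # The retrieved context is prepended so the LLM can answer grounded in it.
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}\nAnswer:"

if __name__ == "__main__":
    # Toy embedding: character-frequency vector. A real system would use a trained model.
    def embed(text: str) -> List[float]:
        return [float(text.lower().count(c)) for c in "abcdefghijklmnopqrstuvwxyz"]

    store = ToyVectorStore(embed)
    store.add("Anomalous login volume from a single IP may indicate credential stuffing.")
    store.add("Cloud telemetry includes app events and network traffic logs.")
    prompt = build_prompt("What could a spike in logins from one IP mean?", store.top_k("login spike", k=1))
    print(prompt)  # in a full pipeline, this prompt would be sent to an LLM

In a production pipeline, the returned prompt would go to an LLM, and the in-memory store would be replaced by a managed vector database with approximate nearest-neighbor search.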

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Greater Bhopal Area

Remote

Experience: 15.00+ years | Salary: Confidential (based on experience) | Shift: (GMT+05:30) Asia/Kolkata (IST) | Opportunity Type: Remote | Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients, Netskope.)

Must-have skills: CI/CD, RCA, Performance Engineering, AWS, Scaled Agile, Python, Grafana, Linux

About The Role
Please note: this team is hiring across all levels, and candidates are individually assessed and leveled according to their skills and experience. The Application SRE Team supports several critical components of our foundational technologies for real-time protection, as well as our RBI and SSPM services. We are a team of software engineers focused on improving the availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning of the engineering stacks. If you are passionate about solving complex problems and developing cloud services at scale, we would like to speak with you.

What's In It For You
- You will be part of a high-caliber engineering team in the exciting space of cloud tools and infrastructure management.
- You will have an opportunity to work on hybrid cloud (Google Cloud, on-prem cloud) with cutting-edge tooling such as Spinnaker, Kubernetes, Docker, and more.
- You will solve complex, exciting challenges and improve the depth and breadth of your technical and analytical skills.
- Your contributions to our market-leading product support will significantly impact our rapidly growing global customer base.

What You Will Be Doing
- Partner closely with our development teams and product managers to architect and build features that are highly available, performant, and secure.
- Develop innovative ways to smartly measure, monitor, and report application and infrastructure health (see the latency-percentile sketch after this listing).
- Gain deep knowledge of our application stack.
- Improve the performance of microservices and solve scaling/performance issues.
- Own capacity management and planning.
- Function well in a fast-paced, rapidly changing environment.
- Participate with the dev teams in a 24x7 on-call rotation.
- Debug and optimize code and automate routine tasks.
- Drive efficiencies in systems and processes: capacity planning, configuration management, performance tuning, monitoring, and root cause analysis.

Required Skills And Experience
- 15+ years of experience troubleshooting Unix/Linux.
- Experience in a large-scale web operations role.
- Experience in one or more of the following: C, C++, Java, Python, Go, Perl, or Ruby.
- Experience with algorithms, data structures, complexity analysis, and software design.
- Hands-on experience with private or public cloud services in a highly available and scalable production environment.
- Experience with continuous integration and deployment automation tools such as Jenkins, Ansible, etc.
- Knowledge of distributed systems is a big plus.
- Previous experience working with geographically distributed coworkers.
- Strong interpersonal communication skills (listening, speaking, and writing) and the ability to work well in a diverse, team-focused environment with other SREs, developers, product managers, etc.
- Experience leading teams and collaborating cross-functionally to deliver complex software features and solutions.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of being shortlisted and meeting the client for the interview.

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role is to help our talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: there are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
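
As a flavor of the monitoring and performance-tuning responsibilities above, here is a small, hypothetical sketch that computes p50/p95/p99 request latencies and checks them against an SLO budget; in practice such numbers would come from production telemetry and feed a Grafana dashboard or alerting rule. The function names and the 250 ms budget are illustrative assumptions, not part of the posting.

# Illustrative latency-percentile calculation of the kind an SRE might feed into a
# dashboard or alerting rule. Purely a sketch; the SLO threshold below is made up.
import statistics
from typing import Dict, List

def latency_percentiles(latencies_ms: List[float]) -> Dict[str, float]:
    # statistics.quantiles with n=100 returns the 1st..99th percentile cut points.
    cuts = statistics.quantiles(latencies_ms, n=100)
    return {"p50": cuts[49], "p95": cuts[94], "p99": cuts[98]}

def breaches_slo(latencies_ms: List[float], p99_budget_ms: float = 250.0) -> bool:
    # Example SLO check: flag when the observed p99 exceeds the (hypothetical) budget.
    return latency_percentiles(latencies_ms)["p99"] > p99_budget_ms

if __name__ == "__main__":
    sample = [12.0, 15.5, 9.8, 220.0, 30.2, 18.7, 305.0, 22.1, 14.9, 27.3] * 20
    print(latency_percentiles(sample))
    print("p99 SLO breached:", breaches_slo(sample))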

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Greater Bhopal Area

Remote

Duplicate listing: same Netskope Data Engineering (AI/ML) role via Uplers; the description is identical to the first listing on this page.

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Visakhapatnam, Andhra Pradesh, India

Remote

Duplicate listing: same Netskope Data Engineering (AI/ML) role via Uplers; the description is identical to the first listing on this page.

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Visakhapatnam, Andhra Pradesh, India

Remote

Duplicate listing: same Netskope Application SRE role via Uplers; the description is identical to the first SRE listing on this page.

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Kolkata, West Bengal, India

Remote

Duplicate listing: same Netskope Data Engineering (AI/ML) role via Uplers; the description is identical to the first listing on this page.

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Bhubaneswar, Odisha, India

Remote

Duplicate listing: same Netskope Application SRE role via Uplers; the description is identical to the first SRE listing on this page.

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Ranchi, Jharkhand, India

Remote

Duplicate listing: same Netskope Application SRE role via Uplers; the description is identical to the first SRE listing on this page.

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Amritsar, Punjab, India

Remote

Duplicate listing: same Netskope Application SRE role via Uplers; the description is identical to the first SRE listing on this page.

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Jamshedpur, Jharkhand, India

Remote

Duplicate listing: same Netskope Data Engineering (AI/ML) role via Uplers; the description is identical to the first listing on this page.

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Surat, Gujarat, India

Remote

Duplicate listing: same Netskope Application SRE role via Uplers; the description is identical to the first SRE listing on this page.

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote

Experience: 6.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients, Netskope.)

What do you need for this opportunity?
Must-have skills: Airflow, LLMs, MLOps, Generative AI, Python

Netskope is looking for:

About The Role
Please note, this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault-tolerant solutions to their ever-growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments.

What's In It For You
You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics.
Your contributions will have a major impact on our global customer base and across the industry through our market-leading products.
You will solve complex, interesting challenges and improve the depth and breadth of your technical and business skills.

What You Will Be Doing
Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security.
Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments.
Apply MLOps best practices to deploy and monitor machine learning models in production.
Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP.
Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications.
Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats.
Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards.
Drive innovation by integrating the latest AI/ML techniques into security products and services.
Mentor junior engineers and provide technical leadership across projects.

Required Skills And Experience
AI/ML Expertise
Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection.
Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn.
Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow).
Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases.
Data Engineering
Expertise in designing and optimizing ETL/ELT pipelines for large-scale data processing.
Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink).
Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery.
Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems.
Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake.
Cloud and Security Knowledge
Strong understanding of cloud platforms (AWS, Azure, GCP) and their services.
Experience with network security concepts, extended detection and response, and threat modeling.
Software Engineering
Proficiency in Python, Java, or Scala for data and ML solution development.
Expertise in scalable system design and performance optimization for high-throughput applications.
Leadership and Collaboration
Proven ability to lead cross-functional teams and mentor engineers.
Strong communication skills to present complex technical concepts to stakeholders.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
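
The role above calls for building Retrieval-Augmented Generation (RAG) systems that pair LLMs with vector databases. Purely as a rough illustration of that retrieve-then-generate pattern, and not Netskope's implementation, here is a minimal Python sketch; `embed_text` and `call_llm` are hypothetical placeholders for an embedding model and an LLM endpoint, and a real system would use a vector database such as those named in the listing rather than an in-memory matrix.

```python
# Minimal RAG sketch (illustrative only): embed documents, retrieve the most
# similar ones for a query, and assemble a grounded prompt for an LLM.
# `embed_text` and `call_llm` are hypothetical stand-ins for a real embedding
# model and LLM client.
from typing import Callable, List, Tuple
import numpy as np

def build_index(docs: List[str], embed_text: Callable[[str], np.ndarray]) -> np.ndarray:
    """Embed every document once; a production system would use a vector DB instead."""
    return np.vstack([embed_text(d) for d in docs])

def retrieve(query: str, docs: List[str], index: np.ndarray,
             embed_text: Callable[[str], np.ndarray], k: int = 3) -> List[Tuple[str, float]]:
    """Return the top-k documents by cosine similarity to the query."""
    q = embed_text(query)
    sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q) + 1e-9)
    top = np.argsort(-sims)[:k]
    return [(docs[i], float(sims[i])) for i in top]

def answer(query: str, docs: List[str], index: np.ndarray,
           embed_text: Callable[[str], np.ndarray],
           call_llm: Callable[[str], str]) -> str:
    """Ground the LLM's answer in the retrieved context."""
    context = "\n".join(d for d, _ in retrieve(query, docs, index, embed_text))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)
```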

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Nashik, Maharashtra, India

Remote

Experience: 15.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients, Netskope.)

What do you need for this opportunity?
Must-have skills: CI/CD, RCA, Performance Engineering, AWS, Scaled Agile, Python, Grafana, Linux

Netskope is looking for:

About The Role
Please note, this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Application SRE Team supports several critical components of our foundational technologies for real-time protection, as well as our RBI and SSPM services. We are a team of software engineers focused on improving availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning of the engineering stacks. If you are passionate about solving complex problems and developing cloud services at scale, we would like to speak with you.

What's In It For You
You will be part of a high-caliber engineering team in the exciting space of cloud tools and infrastructure management.
You will have an opportunity to work on hybrid cloud (Google Cloud, on-prem cloud) and with cutting-edge tooling like Spinnaker, Kubernetes, Docker and more.
You will solve complex, exciting challenges and improve the depth and breadth of your technical and analytical skills.
Your contributions to our market-leading product support will significantly impact our rapidly growing global customer base.

What You Will Be Doing
Partner closely with our development teams and product managers to architect and build features that are highly available, performant and secure.
Develop innovative ways to smartly measure, monitor and report application and infrastructure health.
Gain deep knowledge of our application stack.
Improve the performance of microservices and solve scaling/performance issues.
Handle capacity management and planning.
Function well in a fast-paced and rapidly changing environment.
Participate with the dev teams in a 24x7 on-call rotation.
Debug and optimize code and automate routine tasks.
Drive efficiencies in systems and processes: capacity planning, configuration management, performance tuning, monitoring and root cause analysis.

Required Skills And Experience
15+ years of experience troubleshooting Unix/Linux.
Experience in managing a large-scale web operations role.
Experience in one or more of the following: C, C++, Java, Python, Go, Perl or Ruby.
Experience with algorithms, data structures, complexity analysis, and software design.
Hands-on experience with private or public cloud services in a highly available and scalable production environment.
Experience with continuous integration and deployment automation tools such as Jenkins, Ansible, etc.
Knowledge of distributed systems is a big plus.
Previous experience working with geographically distributed coworkers.
Strong interpersonal communication skills (including listening, speaking, and writing) and the ability to work well in a diverse, team-focused environment with other SREs, developers, product managers, etc.
Should have led teams, collaborating cross-functionally to deliver complex software features and solutions.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
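
The SRE listing above emphasizes measuring, monitoring and reporting application health, with Python and Grafana among the must-have skills. Purely as an illustrative sketch, and not anything taken from the posting itself, the snippet below exposes a few probe metrics with the prometheus_client library so a Prometheus/Grafana stack could scrape them; the probed URL, ports, and probe interval are assumptions.

```python
# Tiny health exporter sketch: probes a (hypothetical) service endpoint and
# publishes the results as Prometheus metrics for a Grafana dashboard.
import time
import urllib.request
from prometheus_client import Gauge, Counter, start_http_server

UP = Gauge("app_up", "1 if the health endpoint responded with 200, else 0")
LATENCY = Gauge("app_health_latency_seconds", "Latency of the last health probe")
ERRORS = Counter("app_health_probe_errors_total", "Total failed health probes")

def probe(url: str) -> None:
    start = time.time()
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            UP.set(1 if resp.status == 200 else 0)
    except Exception:
        UP.set(0)
        ERRORS.inc()
    finally:
        LATENCY.set(time.time() - start)

if __name__ == "__main__":
    start_http_server(9100)                      # metrics served on :9100/metrics
    while True:
        probe("http://localhost:8080/healthz")   # hypothetical service endpoint
        time.sleep(15)
```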

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Kanpur, Uttar Pradesh, India

Remote

Experience: 6.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients, Netskope.)

What do you need for this opportunity?
Must-have skills: Airflow, LLMs, MLOps, Generative AI, Python

Netskope is looking for:

About The Role
Please note, this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault-tolerant solutions to their ever-growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments.

What's In It For You
You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics.
Your contributions will have a major impact on our global customer base and across the industry through our market-leading products.
You will solve complex, interesting challenges and improve the depth and breadth of your technical and business skills.

What You Will Be Doing
Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security.
Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments.
Apply MLOps best practices to deploy and monitor machine learning models in production.
Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP.
Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications.
Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats.
Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards.
Drive innovation by integrating the latest AI/ML techniques into security products and services.
Mentor junior engineers and provide technical leadership across projects.

Required Skills And Experience
AI/ML Expertise
Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection.
Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn.
Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow).
Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases.
Data Engineering
Expertise in designing and optimizing ETL/ELT pipelines for large-scale data processing.
Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink).
Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery.
Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems.
Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake.
Cloud and Security Knowledge
Strong understanding of cloud platforms (AWS, Azure, GCP) and their services.
Experience with network security concepts, extended detection and response, and threat modeling.
Software Engineering
Proficiency in Python, Java, or Scala for data and ML solution development.
Expertise in scalable system design and performance optimization for high-throughput applications.
Leadership and Collaboration
Proven ability to lead cross-functional teams and mentor engineers.
Strong communication skills to present complex technical concepts to stakeholders.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Kolkata, West Bengal, India

Remote

Experience: 4.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients, Netskope.)

What do you need for this opportunity?
Must-have skills: Java, Spark, Kafka

Netskope is looking for: Sr. Software Engineer, IoT Security

About The Role
Please note, this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based upon their skills and experience. Netskope One SASE combines Netskope's market-leading Intelligent SSE with its next-generation Borderless SD-WAN to protect users, applications, and data everywhere with AI-powered zero trust security, while providing fast, reliable access and optimized connectivity to any application from any network location or device, including IoT, at scale. Click here to learn more about Netskope IoT Security.

What's In It For You
As a member of the IoT Security Team you will be working on some of the most challenging problems in the field of zero trust and IoT security. You will play a key role in the design, development, evolution and operation of a system that analyzes hundreds of parameters from discovered devices and leverages our rich contextual intelligence for device classification, risk assessment, granular access control and network segmentation.

What You Will Be Doing
Contributing to the design, development, scaling and operation of Netskope IoT Security.
Identifying and bringing emerging technologies and best practices into the team.
Refining existing technologies to make the product more performant.
Developing the OT security part of the solution.
Owning all cloud components and driving architecture and design.
Engaging in cross-functional team conversations to help prioritize tasks, communicate goals clearly to team members, and deliver the overall project.

Required Skills And Experience
Scala and Java
Writing OOP and functional programming code
Writing UDFs
Using Scala with Spark
Collection framework
Logging
Sending metrics to Grafana
Spark and Kafka
Understanding of RDDs, DataFrames and Datasets
Broadcast variables
Spark Streaming with Kafka
Understanding Spark cluster settings
Executor and driver setup
Understanding of Kafka topics and offsets
Good knowledge of Python programming, microservices architecture and REST APIs is also desired.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
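
The skills list above mentions Spark Streaming with Kafka and notes that Python knowledge is desired alongside Scala. The following PySpark sketch shows the general shape of reading device telemetry from a Kafka topic with Structured Streaming; the broker address, topic name, and event schema are made up for illustration, it is not Netskope's pipeline, and the spark-sql-kafka connector is assumed to be available on the classpath.

```python
# Illustrative PySpark Structured Streaming job: consume (hypothetical) device
# telemetry from Kafka, parse JSON values, and maintain a running count per
# device category on the console sink.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType

schema = StructType([
    StructField("device_id", StringType()),
    StructField("category", StringType()),
    StructField("risk_score", StringType()),
])

spark = SparkSession.builder.appName("iot-telemetry-demo").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # hypothetical broker
    .option("subscribe", "device-telemetry")              # hypothetical topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.groupBy("category").count()     # toy aggregation per device category
    .writeStream.outputMode("complete")
    .format("console")
    .start()
)
query.awaitTermination()
```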

Posted 3 weeks ago

Apply

5.0 years

30 - 60 Lacs

Kolkata, West Bengal, India

Remote

Experience: 5.00+ years
Salary: INR 3000000-6000000 / year (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients, Netskope.)

What do you need for this opportunity?
Must-have skills: Java, Python, Golang, AWS, Google Cloud, Azure, MongoDB, PostgreSQL, Yugabyte, AuroraDB

Netskope is looking for:

About Netskope
Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.

About The Role
Please note, this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based upon their skills and experience. Netskope's API Protection team is responsible for designing and implementing a scalable and elastic architecture to provide protection for enterprise SaaS and IaaS application data. This is achieved by ingesting high-volume activity events in near real time and analyzing data to provide security risk management for our customers, including data security, access control, threat prevention, data loss prevention, user coaching and more.

What's In It For You
As a member of this team you will work in an innovative, fast-paced environment with other experts to build cloud-native solutions using technologies like Kubernetes, Helm, Prometheus, Grafana, Jaeger (OpenTracing), persistent messaging queues, SQL/NoSQL databases, key-value stores, etc. You will solve complex scale problems, and deploy and manage the solution in production. If you are driven by high-quality, high-velocity software delivery challenges, and by using innovative and cutting-edge solutions to achieve these goals, we would like to speak with you.

What You Will Be Doing
Architect and implement critical software infrastructure for distributed, large-scale, multi-cloud environments.
Review architectures and designs across the organization to help guide other engineers to build scalable cloud services.
Provide technical leadership and strategic direction for large-scale distributed cloud-native solutions.
Be a catalyst for improving engineering processes and ownership.
Research, incubate, and drive new technologies to ensure we are leveraging the latest innovations.

Required Skills And Experience
5 to 15 years of experience in the field of software development.
Excellent programming experience with Go, C/C++, Java, Python.
Experience building and delivering cloud microservices at scale.
Expert understanding of distributed systems, data structures, and algorithms.
A skilled problem solver well-versed in considering and making technical tradeoffs.
A strong communicator who can quickly pick up new concepts and domains.
Bonus points for Golang knowledge.
Production experience with building, deploying and managing microservices in Kubernetes or similar technologies is a bonus.
Production experience with cloud-native concepts and technologies related to CI/CD, orchestration (e.g., Helm charts), observability (e.g., Prometheus, OpenTracing), distributed databases, and messaging (REST, gRPC) is a bonus.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
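
The role above centers on cloud-native microservices run in Kubernetes with REST/gRPC messaging and Prometheus-style observability. As a loose, standard-library-only sketch, and not the team's actual service code, here is the kind of liveness/readiness endpoint such a service typically exposes for Kubernetes probes; the paths and port are illustrative assumptions.

```python
# Minimal HTTP service exposing /healthz (liveness) and /readyz (readiness)
# endpoints of the sort a Kubernetes deployment would probe. Illustrative only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":          # liveness: the process is up
            body = {"status": "ok"}
        elif self.path == "/readyz":         # readiness: dependencies reachable
            body = {"status": "ready", "deps": {"db": "ok", "queue": "ok"}}
        else:
            self.send_response(404)
            self.end_headers()
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()   # hypothetical port
```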

Posted 3 weeks ago

Apply

8.0 years

30 - 50 Lacs

Kolkata, West Bengal, India

Remote

Experience: 8.00+ years
Salary: INR 3000000-5000000 / year (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients, Netskope.)

What do you need for this opportunity?
Must-have skills: JMeter, Selenium, Automation Anywhere, API Testing, UI Testing, Java, Python, Golang

Netskope is looking for:

About Netskope
Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.

About The Role
Please note, this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based upon their skills and experience. Netskope's API Protection Framework team is responsible for designing and implementing a scalable and elastic architecture to provide protection for enterprise SaaS and IaaS application data. This is achieved by ingesting high-volume activity events in near real time and analyzing data to provide security risk management for our customers, including data security, access control, threat prevention, data loss prevention, user coaching and more.

What's In It For You
As a member of this team you will work in an innovative, fast-paced environment with other experts to build cloud-native solutions using technologies like Kubernetes, Helm, Prometheus, Grafana, Jaeger (OpenTracing), persistent messaging queues, SQL/NoSQL databases, key-value stores, etc. You will solve complex scale problems, and deploy and manage the solution in production. If you are driven by high-quality, high-velocity software delivery challenges, and by using innovative and cutting-edge solutions to achieve these goals, we would like to speak with you.

What You Will Be Doing
Developing expertise in our cloud security solutions, and using that expertise and your experience to help design and qualify the solution as a whole.
Contributing to building a flexible and scalable automation solution.
Working closely with the development and design team to help create an amazing user experience.
Helping to create and implement quality processes and requirements.
Working closely with the team to replicate customer environments.
Automating complex test suites.
Developing test libraries and coordinating their adoption.
Identifying and communicating risks about our releases.
Owning and making quality decisions for the solution.
Owning the release and being a customer advocate.

Required Skills And Experience
8+ years of experience in the field of SDET and a track record showing that you are a highly motivated individual, capable of coming up with creative, innovative and working solutions in a collaborative environment.
Strong Java and/or Python programming skills (Go a plus).
Knowledge of Jenkins, Hudson, or any other CI system.
Experience testing distributed systems.
A proponent of strong quality engineering methodology.
Strong knowledge of Linux systems, Docker, and Kubernetes (k8s).
Experience building automation frameworks.
Experience with databases, SQL and NoSQL (MongoDB or Cassandra), a plus.
Knowledge of network security, authentication and authorization.
Comfortable with ambiguity and taking the initiative regarding issues and decisions.
Proven ability to apply data structures and algorithms to practical problems.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
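
The SDET listing above asks for automation frameworks and API testing experience. The snippet below is a hedged example of a small pytest-plus-requests API suite of the sort such frameworks are built from; the base URL, endpoints, and response fields are hypothetical and not drawn from the posting.

```python
# Small API automation sketch using pytest and requests. The target service,
# paths, and response shape are placeholders for illustration only.
import pytest
import requests

BASE_URL = "https://api.example.test"   # placeholder, not a real service

@pytest.fixture
def session():
    s = requests.Session()
    s.headers.update({"Accept": "application/json"})
    yield s
    s.close()

def test_health_endpoint_returns_ok(session):
    resp = session.get(f"{BASE_URL}/healthz", timeout=10)
    assert resp.status_code == 200

def test_list_endpoint_respects_limit(session):
    resp = session.get(f"{BASE_URL}/v1/events", params={"limit": 10}, timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    assert "items" in body and len(body["items"]) <= 10
```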

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Cuttack, Odisha, India

Remote

Experience: 4.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients, Netskope.)

What do you need for this opportunity?
Must-have skills: Java, Spark, Kafka

Netskope is looking for: Sr. Software Engineer, IoT Security

About The Role
Please note, this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based upon their skills and experience. Netskope One SASE combines Netskope's market-leading Intelligent SSE with its next-generation Borderless SD-WAN to protect users, applications, and data everywhere with AI-powered zero trust security, while providing fast, reliable access and optimized connectivity to any application from any network location or device, including IoT, at scale. Click here to learn more about Netskope IoT Security.

What's In It For You
As a member of the IoT Security Team you will be working on some of the most challenging problems in the field of zero trust and IoT security. You will play a key role in the design, development, evolution and operation of a system that analyzes hundreds of parameters from discovered devices and leverages our rich contextual intelligence for device classification, risk assessment, granular access control and network segmentation.

What You Will Be Doing
Contributing to the design, development, scaling and operation of Netskope IoT Security.
Identifying and bringing emerging technologies and best practices into the team.
Refining existing technologies to make the product more performant.
Developing the OT security part of the solution.
Owning all cloud components and driving architecture and design.
Engaging in cross-functional team conversations to help prioritize tasks, communicate goals clearly to team members, and deliver the overall project.

Required Skills And Experience
Scala and Java
Writing OOP and functional programming code
Writing UDFs
Using Scala with Spark
Collection framework
Logging
Sending metrics to Grafana
Spark and Kafka
Understanding of RDDs, DataFrames and Datasets
Broadcast variables
Spark Streaming with Kafka
Understanding Spark cluster settings
Executor and driver setup
Understanding of Kafka topics and offsets
Good knowledge of Python programming, microservices architecture and REST APIs is also desired.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 weeks ago

Apply

8.0 years

30 - 50 Lacs

Cuttack, Odisha, India

Remote

Experience: 8.00+ years
Salary: INR 3000000-5000000 / year (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients, Netskope.)

What do you need for this opportunity?
Must-have skills: JMeter, Selenium, Automation Anywhere, API Testing, UI Testing, Java, Python, Golang

Netskope is looking for:

About Netskope
Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.

About The Role
Please note, this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based upon their skills and experience. Netskope's API Protection Framework team is responsible for designing and implementing a scalable and elastic architecture to provide protection for enterprise SaaS and IaaS application data. This is achieved by ingesting high-volume activity events in near real time and analyzing data to provide security risk management for our customers, including data security, access control, threat prevention, data loss prevention, user coaching and more.

What's In It For You
As a member of this team you will work in an innovative, fast-paced environment with other experts to build cloud-native solutions using technologies like Kubernetes, Helm, Prometheus, Grafana, Jaeger (OpenTracing), persistent messaging queues, SQL/NoSQL databases, key-value stores, etc. You will solve complex scale problems, and deploy and manage the solution in production. If you are driven by high-quality, high-velocity software delivery challenges, and by using innovative and cutting-edge solutions to achieve these goals, we would like to speak with you.

What You Will Be Doing
Developing expertise in our cloud security solutions, and using that expertise and your experience to help design and qualify the solution as a whole.
Contributing to building a flexible and scalable automation solution.
Working closely with the development and design team to help create an amazing user experience.
Helping to create and implement quality processes and requirements.
Working closely with the team to replicate customer environments.
Automating complex test suites.
Developing test libraries and coordinating their adoption.
Identifying and communicating risks about our releases.
Owning and making quality decisions for the solution.
Owning the release and being a customer advocate.

Required Skills And Experience
8+ years of experience in the field of SDET and a track record showing that you are a highly motivated individual, capable of coming up with creative, innovative and working solutions in a collaborative environment.
Strong Java and/or Python programming skills (Go a plus).
Knowledge of Jenkins, Hudson, or any other CI system.
Experience testing distributed systems.
A proponent of strong quality engineering methodology.
Strong knowledge of Linux systems, Docker, and Kubernetes (k8s).
Experience building automation frameworks.
Experience with databases, SQL and NoSQL (MongoDB or Cassandra), a plus.
Knowledge of network security, authentication and authorization.
Comfortable with ambiguity and taking the initiative regarding issues and decisions.
Proven ability to apply data structures and algorithms to practical problems.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 weeks ago

Apply

5.0 years

30 - 60 Lacs

Bhubaneswar, Odisha, India

Remote

Experience: 5.00+ years
Salary: INR 3000000-6000000 / year (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients, Netskope.)

What do you need for this opportunity?
Must-have skills: Java, Python, Golang, AWS, Google Cloud, Azure, MongoDB, PostgreSQL, Yugabyte, AuroraDB

Netskope is looking for:

About Netskope
Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.

About The Role
Please note, this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based upon their skills and experience. Netskope's API Protection team is responsible for designing and implementing a scalable and elastic architecture to provide protection for enterprise SaaS and IaaS application data. This is achieved by ingesting high-volume activity events in near real time and analyzing data to provide security risk management for our customers, including data security, access control, threat prevention, data loss prevention, user coaching and more.

What's In It For You
As a member of this team you will work in an innovative, fast-paced environment with other experts to build cloud-native solutions using technologies like Kubernetes, Helm, Prometheus, Grafana, Jaeger (OpenTracing), persistent messaging queues, SQL/NoSQL databases, key-value stores, etc. You will solve complex scale problems, and deploy and manage the solution in production. If you are driven by high-quality, high-velocity software delivery challenges, and by using innovative and cutting-edge solutions to achieve these goals, we would like to speak with you.

What You Will Be Doing
Architect and implement critical software infrastructure for distributed, large-scale, multi-cloud environments.
Review architectures and designs across the organization to help guide other engineers to build scalable cloud services.
Provide technical leadership and strategic direction for large-scale distributed cloud-native solutions.
Be a catalyst for improving engineering processes and ownership.
Research, incubate, and drive new technologies to ensure we are leveraging the latest innovations.

Required Skills And Experience
5 to 15 years of experience in the field of software development.
Excellent programming experience with Go, C/C++, Java, Python.
Experience building and delivering cloud microservices at scale.
Expert understanding of distributed systems, data structures, and algorithms.
A skilled problem solver well-versed in considering and making technical tradeoffs.
A strong communicator who can quickly pick up new concepts and domains.
Bonus points for Golang knowledge.
Production experience with building, deploying and managing microservices in Kubernetes or similar technologies is a bonus.
Production experience with cloud-native concepts and technologies related to CI/CD, orchestration (e.g., Helm charts), observability (e.g., Prometheus, OpenTracing), distributed databases, and messaging (REST, gRPC) is a bonus.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Bhubaneswar, Odisha, India

Remote

Experience: 4.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients, Netskope.)

What do you need for this opportunity?
Must-have skills: Java, Spark, Kafka

Netskope is looking for: Sr. Software Engineer, IoT Security

About The Role
Please note, this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based upon their skills and experience. Netskope One SASE combines Netskope's market-leading Intelligent SSE with its next-generation Borderless SD-WAN to protect users, applications, and data everywhere with AI-powered zero trust security, while providing fast, reliable access and optimized connectivity to any application from any network location or device, including IoT, at scale. Click here to learn more about Netskope IoT Security.

What's In It For You
As a member of the IoT Security Team you will be working on some of the most challenging problems in the field of zero trust and IoT security. You will play a key role in the design, development, evolution and operation of a system that analyzes hundreds of parameters from discovered devices and leverages our rich contextual intelligence for device classification, risk assessment, granular access control and network segmentation.

What You Will Be Doing
Contributing to the design, development, scaling and operation of Netskope IoT Security.
Identifying and bringing emerging technologies and best practices into the team.
Refining existing technologies to make the product more performant.
Developing the OT security part of the solution.
Owning all cloud components and driving architecture and design.
Engaging in cross-functional team conversations to help prioritize tasks, communicate goals clearly to team members, and deliver the overall project.

Required Skills And Experience
Scala and Java
Writing OOP and functional programming code
Writing UDFs
Using Scala with Spark
Collection framework
Logging
Sending metrics to Grafana
Spark and Kafka
Understanding of RDDs, DataFrames and Datasets
Broadcast variables
Spark Streaming with Kafka
Understanding Spark cluster settings
Executor and driver setup
Understanding of Kafka topics and offsets
Good knowledge of Python programming, microservices architecture and REST APIs is also desired.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 weeks ago

Apply

5.0 years

30 - 60 Lacs

Cuttack, Odisha, India

Remote

Experience: 5.00+ years
Salary: INR 3000000-6000000 / year (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients, Netskope.)

What do you need for this opportunity?
Must-have skills: Java, Python, Golang, AWS, Google Cloud, Azure, MongoDB, PostgreSQL, Yugabyte, AuroraDB

Netskope is looking for:

About Netskope
Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.

About The Role
Please note, this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based upon their skills and experience. Netskope's API Protection team is responsible for designing and implementing a scalable and elastic architecture to provide protection for enterprise SaaS and IaaS application data. This is achieved by ingesting high-volume activity events in near real time and analyzing data to provide security risk management for our customers, including data security, access control, threat prevention, data loss prevention, user coaching and more.

What's In It For You
As a member of this team you will work in an innovative, fast-paced environment with other experts to build cloud-native solutions using technologies like Kubernetes, Helm, Prometheus, Grafana, Jaeger (OpenTracing), persistent messaging queues, SQL/NoSQL databases, key-value stores, etc. You will solve complex scale problems, and deploy and manage the solution in production. If you are driven by high-quality, high-velocity software delivery challenges, and by using innovative and cutting-edge solutions to achieve these goals, we would like to speak with you.

What You Will Be Doing
Architect and implement critical software infrastructure for distributed, large-scale, multi-cloud environments.
Review architectures and designs across the organization to help guide other engineers to build scalable cloud services.
Provide technical leadership and strategic direction for large-scale distributed cloud-native solutions.
Be a catalyst for improving engineering processes and ownership.
Research, incubate, and drive new technologies to ensure we are leveraging the latest innovations.

Required Skills And Experience
5 to 15 years of experience in the field of software development.
Excellent programming experience with Go, C/C++, Java, Python.
Experience building and delivering cloud microservices at scale.
Expert understanding of distributed systems, data structures, and algorithms.
A skilled problem solver well-versed in considering and making technical tradeoffs.
A strong communicator who can quickly pick up new concepts and domains.
Bonus points for Golang knowledge.
Production experience with building, deploying and managing microservices in Kubernetes or similar technologies is a bonus.
Production experience with cloud-native concepts and technologies related to CI/CD, orchestration (e.g., Helm charts), observability (e.g., Prometheus, OpenTracing), distributed databases, and messaging (REST, gRPC) is a bonus.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 weeks ago

Apply

8.0 years

30 - 50 Lacs

Bhubaneswar, Odisha, India

Remote

Experience: 8.00+ years
Salary: INR 3000000-5000000 / year (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients, Netskope.)

What do you need for this opportunity?
Must-have skills: JMeter, Selenium, Automation Anywhere, API Testing, UI Testing, Java, Python, Golang

Netskope is looking for:

About Netskope
Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.

About The Role
Please note, this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based upon their skills and experience. Netskope's API Protection Framework team is responsible for designing and implementing a scalable and elastic architecture to provide protection for enterprise SaaS and IaaS application data. This is achieved by ingesting high-volume activity events in near real time and analyzing data to provide security risk management for our customers, including data security, access control, threat prevention, data loss prevention, user coaching and more.

What's In It For You
As a member of this team you will work in an innovative, fast-paced environment with other experts to build cloud-native solutions using technologies like Kubernetes, Helm, Prometheus, Grafana, Jaeger (OpenTracing), persistent messaging queues, SQL/NoSQL databases, key-value stores, etc. You will solve complex scale problems, and deploy and manage the solution in production. If you are driven by high-quality, high-velocity software delivery challenges, and by using innovative and cutting-edge solutions to achieve these goals, we would like to speak with you.

What You Will Be Doing
Developing expertise in our cloud security solutions, and using that expertise and your experience to help design and qualify the solution as a whole.
Contributing to building a flexible and scalable automation solution.
Working closely with the development and design team to help create an amazing user experience.
Helping to create and implement quality processes and requirements.
Working closely with the team to replicate customer environments.
Automating complex test suites.
Developing test libraries and coordinating their adoption.
Identifying and communicating risks about our releases.
Owning and making quality decisions for the solution.
Owning the release and being a customer advocate.

Required Skills And Experience
8+ years of experience in the field of SDET and a track record showing that you are a highly motivated individual, capable of coming up with creative, innovative and working solutions in a collaborative environment.
Strong Java and/or Python programming skills (Go a plus).
Knowledge of Jenkins, Hudson, or any other CI system.
Experience testing distributed systems.
A proponent of strong quality engineering methodology.
Strong knowledge of Linux systems, Docker, and Kubernetes (k8s).
Experience building automation frameworks.
Experience with databases, SQL and NoSQL (MongoDB or Cassandra), a plus.
Knowledge of network security, authentication and authorization.
Comfortable with ambiguity and taking the initiative regarding issues and decisions.
Proven ability to apply data structures and algorithms to practical problems.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 weeks ago

Apply

5.0 years

30 - 60 Lacs

Guwahati, Assam, India

Remote

Experience: 5.00+ years
Salary: INR 3000000-6000000 / year (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients, Netskope.)

What do you need for this opportunity?
Must-have skills: Java, Python, Golang, AWS, Google Cloud, Azure, MongoDB, PostgreSQL, Yugabyte, AuroraDB

Netskope is looking for:

About Netskope
Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.

About The Role
Please note, this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based upon their skills and experience. Netskope's API Protection team is responsible for designing and implementing a scalable and elastic architecture to provide protection for enterprise SaaS and IaaS application data. This is achieved by ingesting high-volume activity events in near real time and analyzing data to provide security risk management for our customers, including data security, access control, threat prevention, data loss prevention, user coaching and more.

What's In It For You
As a member of this team you will work in an innovative, fast-paced environment with other experts to build cloud-native solutions using technologies like Kubernetes, Helm, Prometheus, Grafana, Jaeger (OpenTracing), persistent messaging queues, SQL/NoSQL databases, key-value stores, etc. You will solve complex scale problems, and deploy and manage the solution in production. If you are driven by high-quality, high-velocity software delivery challenges, and by using innovative and cutting-edge solutions to achieve these goals, we would like to speak with you.

What You Will Be Doing
Architect and implement critical software infrastructure for distributed, large-scale, multi-cloud environments.
Review architectures and designs across the organization to help guide other engineers to build scalable cloud services.
Provide technical leadership and strategic direction for large-scale distributed cloud-native solutions.
Be a catalyst for improving engineering processes and ownership.
Research, incubate, and drive new technologies to ensure we are leveraging the latest innovations.

Required Skills And Experience
5 to 15 years of experience in the field of software development.
Excellent programming experience with Go, C/C++, Java, Python.
Experience building and delivering cloud microservices at scale.
Expert understanding of distributed systems, data structures, and algorithms.
A skilled problem solver well-versed in considering and making technical tradeoffs.
A strong communicator who can quickly pick up new concepts and domains.
Bonus points for Golang knowledge.
Production experience with building, deploying and managing microservices in Kubernetes or similar technologies is a bonus.
Production experience with cloud-native concepts and technologies related to CI/CD, orchestration (e.g., Helm charts), observability (e.g., Prometheus, OpenTracing), distributed databases, and messaging (REST, gRPC) is a bonus.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Guwahati, Assam, India

Remote

Experience : 4.00 + years
Salary : Confidential (based on experience)
Shift : (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type : Remote
Placement Type : Full time Permanent Position
(*Note: This is a requirement for one of Uplers' clients - Netskope)

What do you need for this opportunity?
Must have skills required: Java, Spark, Kafka

Netskope is Looking for: Sr. Software Engineer, IoT Security

About The Role
Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience.
Netskope One SASE combines Netskope’s market-leading Intelligent SSE with its next-generation Borderless SD-WAN to protect users, applications, and data everywhere with AI-powered zero trust security, while providing fast, reliable access and optimized connectivity to any application from any network location or device, including IoT, at scale. Click here to learn more about Netskope IoT Security.

What's In It For You
As a member of the IoT Security team you will be working on some of the most challenging problems in the field of zero trust and IoT security. You will play a key role in the design, development, evolution and operation of a system that analyzes hundreds of parameters from discovered devices and leverages our rich contextual intelligence for device classification, risk assessment, granular access control and network segmentation.

What You Will Be Doing
Contributing to the design, development, scaling and operation of Netskope IoT Security.
Identifying emerging technologies and best practices and incorporating them into the team's work.
Refining existing technologies to make the product more performant.
Developing the OT security part of the solution.
Owning all cloud components and driving their architecture and design.
Engaging in cross-functional team conversations to help prioritize tasks, communicate goals clearly to team members, and drive overall project delivery.

Required Skills And Experience
Scala and Java:
Writing OOP and functional programming code
Writing UDFs in Scala for use with Spark
Collections framework
Logging
Sending metrics to Grafana
Spark and Kafka:
Understanding of RDDs, DataFrames and Datasets
Broadcast variables
Spark Streaming with Kafka (see the sketch after this listing)
Understanding Spark cluster settings
Executor and driver setup
Understanding of Kafka topics and offsets
Good knowledge of Python programming, microservices architecture, and REST APIs is also desired.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred

How to apply for this opportunity?
Step 1: Click on Apply! and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
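To illustrate the Spark Streaming with Kafka and UDF items in the skills list above, here is a minimal sketch in Java using Spark Structured Streaming. The broker address, topic name, column layout, and the classifyDevice logic are placeholders chosen for the example; they are not the team's actual pipeline.

```java
// Minimal sketch: read device events from Kafka with Spark Structured Streaming
// and apply a registered UDF. Assumes spark-sql and spark-sql-kafka dependencies
// and a reachable Kafka broker; all names below are illustrative placeholders.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.api.java.UDF1;
import org.apache.spark.sql.streaming.StreamingQuery;
import org.apache.spark.sql.types.DataTypes;

import static org.apache.spark.sql.functions.callUDF;
import static org.apache.spark.sql.functions.col;

public class DeviceEventStreamSketch {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("device-event-stream-sketch")
                .getOrCreate();

        // Toy UDF that buckets a device by its reported user agent;
        // a real classifier would use far richer context.
        spark.udf().register("classifyDevice",
                (UDF1<String, String>) ua ->
                        ua != null && ua.toLowerCase().contains("camera")
                                ? "iot-camera" : "unknown",
                DataTypes.StringType);

        // Read raw activity events from a Kafka topic as a streaming Dataset.
        Dataset<Row> events = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "broker:9092")
                .option("subscribe", "device-events")
                .load()
                .selectExpr("CAST(value AS STRING) AS user_agent");

        // Apply the UDF and write classified events to the console sink.
        StreamingQuery query = events
                .withColumn("device_class", callUDF("classifyDevice", col("user_agent")))
                .writeStream()
                .format("console")
                .outputMode("append")
                .start();

        query.awaitTermination();
    }
}
```

The same UDF could be written in Scala and registered identically; broadcast variables, also listed above, typically come into play when such a classifier needs a large read-only lookup table shared across executors.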

Posted 3 weeks ago

Apply


