639 Netskope Jobs - Page 15

JobPe aggregates these listings for easy access; applications are submitted directly on the original job portal.

6.0 years

0 Lacs

Patna, Bihar, India

Remote

Experience: 6.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients, Netskope.)

What do you need for this opportunity?
Must-have skills: Airflow, LLMs, MLOps, Generative AI, Python

Netskope is looking for:

About The Role
Please note, this team is hiring across all levels; candidates are individually assessed and leveled based on their skills and experience.
The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products.
We are looking for skilled engineers experienced in building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault-tolerant solutions to their ever-growing data needs. This is a hands-on, impactful role that will help lead the development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments.

What's In It For You
- You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics.
- Your contributions will have a major impact on our global customer base and across the industry through our market-leading products.
- You will solve complex, interesting challenges and improve the depth and breadth of your technical and business skills.

What You Will Be Doing
- Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security.
- Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments.
- Apply MLOps best practices to deploy and monitor machine learning models in production.
- Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP.
- Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications (a minimal sketch of this pattern follows this listing).
- Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats.
- Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards.
- Drive innovation by integrating the latest AI/ML techniques into security products and services.
- Mentor junior engineers and provide technical leadership across projects.

Required Skills And Experience
AI/ML Expertise
- Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection.
- Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn.
- Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow).
- Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases.
Data Engineering Expertise
- Designing and optimizing ETL/ELT pipelines for large-scale data processing.
- Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink).
- Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery.
- Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems.
- Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake.
Cloud and Security Knowledge
- Strong understanding of cloud platforms (AWS, Azure, GCP) and their services.
- Experience with network security concepts, extended detection and response, and threat modeling.
Software Engineering
- Proficiency in Python, Java, or Scala for data and ML solution development.
- Expertise in scalable system design and performance optimization for high-throughput applications.
Leadership and Collaboration
- Proven ability to lead cross-functional teams and mentor engineers.
- Strong communication skills to present complex technical concepts to stakeholders.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview.

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this one on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
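The RAG work described in this listing follows a common pattern: embed a query, retrieve the most similar documents from a vector store, and pass them as context to an LLM. The sketch below is a minimal, self-contained illustration of that pattern only, not Netskope's implementation; the embed and llm_complete functions, the sample documents, and the 64-dimension hashing trick are hypothetical placeholders for a real embedding model, LLM API, and vector database.

```python
# Minimal Retrieval-Augmented Generation (RAG) sketch.
# `embed` and `llm_complete` are hypothetical placeholders for a real
# embedding model and LLM endpoint; the vector "database" here is just
# an in-memory list scored by cosine similarity.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: hash words into a fixed-size unit vector."""
    vec = np.zeros(64)
    for token in text.lower().split():
        vec[hash(token) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def llm_complete(prompt: str) -> str:
    """Placeholder for a call to an LLM completion API."""
    return f"[LLM answer grounded in]\n{prompt}"

# Tiny in-memory "vector store": (document text, embedding) pairs.
documents = [
    "Anomalous login volume from a single IP may indicate credential stuffing.",
    "DLP policies block uploads of sensitive files to unsanctioned cloud apps.",
    "Inline proxies inspect SaaS traffic in real time.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embed(query)
    scored = sorted(index, key=lambda pair: float(q @ pair[1]), reverse=True)
    return [doc for doc, _ in scored[:k]]

def answer(query: str) -> str:
    """Assemble retrieved context into a prompt and ask the LLM."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."
    return llm_complete(prompt)

if __name__ == "__main__":
    print(answer("How can unusual login activity be detected?"))
```

In a production system of the kind the listing describes, the in-memory list would be replaced by a vector database such as Pinecone or PGVector (both named above), and embed/llm_complete by real model endpoints.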

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Thiruvananthapuram, Kerala, India

Remote

Experience: 15.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients, Netskope.)

What do you need for this opportunity?
Must-have skills: CI/CD, RCA, Performance Engineering, AWS, Scaled Agile, Python, Grafana, Linux

Netskope is looking for:

About The Role
Please note, this team is hiring across all levels; candidates are individually assessed and leveled based on their skills and experience.
The Application SRE Team supports several critical components of our foundational technologies for real-time protection, as well as our RBI and SSPM services. We are a team of software engineers focused on improving availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning of the engineering stacks. If you are passionate about solving complex problems and developing cloud services at scale, we would like to speak with you.

What's In It For You
- You will be part of a high-caliber engineering team in the exciting space of cloud tools and infrastructure management.
- You will have an opportunity to work on hybrid cloud (Google Cloud, on-prem cloud) and with cutting-edge tooling like Spinnaker, Kubernetes, Docker and more.
- You will solve complex, exciting challenges and improve the depth and breadth of your technical and analytical skills.
- Your contributions to our market-leading product support will significantly impact our rapidly growing global customer base.

What You Will Be Doing
- Partner closely with our development teams and product managers to architect and build features that are highly available, performant and secure.
- Develop innovative ways to smartly measure, monitor and report application and infrastructure health (a minimal probe sketch follows this listing).
- Gain deep knowledge of our application stack.
- Improve the performance of microservices and solve scaling/performance issues.
- Handle capacity management and planning.
- Function well in a fast-paced and rapidly changing environment.
- Participate with the dev teams in a 24x7 on-call rotation.
- Debug and optimize code and automate routine tasks.
- Drive efficiencies in systems and processes: capacity planning, configuration management, performance tuning, monitoring and root cause analysis.

Required Skills And Experience
- 15+ years of experience troubleshooting Unix/Linux.
- Experience managing a large-scale web operations role.
- Experience in one or more of the following: C, C++, Java, Python, Go, Perl or Ruby.
- Experience with algorithms, data structures, complexity analysis, and software design.
- Hands-on work with private or public cloud services in a highly available and scalable production environment.
- Experience with continuous integration and deployment automation tools such as Jenkins, Ansible, etc.
- Knowledge of distributed systems is a big plus.
- Previous experience working with geographically distributed coworkers.
- Strong interpersonal communication skills (including listening, speaking, and writing) and the ability to work well in a diverse, team-focused environment with other SREs, developers, product managers, etc.
- Should have led teams, collaborating cross-functionally to deliver complex software features and solutions.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview.

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this one on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
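As a rough illustration of the "measure, monitor and report application and infrastructure health" responsibility above, here is a minimal, standard-library-only probe sketch: it times HTTP GET requests against a health endpoint and prints summary latency figures. The URL, probe count, and latency budget are hypothetical values chosen for illustration; a production setup would ship such measurements to a metrics and alerting stack (for example Grafana, listed in the must-have skills) rather than print them.

```python
# Minimal HTTP health/latency probe (standard library only).
# The URL, probe count, and latency budget are hypothetical values
# for illustration; real probes would feed a metrics/alerting pipeline.
import statistics
import time
import urllib.error
import urllib.request

TARGET_URL = "https://example.com/healthz"  # hypothetical health endpoint
PROBES = 5
LATENCY_BUDGET_MS = 500.0

def probe(url: str, timeout: float = 3.0) -> tuple[bool, float]:
    """Issue one GET request; return (success, latency in milliseconds)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 300
    except (urllib.error.URLError, TimeoutError):
        ok = False
    return ok, (time.monotonic() - start) * 1000.0

def main() -> None:
    results = [probe(TARGET_URL) for _ in range(PROBES)]
    latencies = [ms for ok, ms in results if ok]
    failures = sum(1 for ok, _ in results if not ok)

    print(f"probes={PROBES} failures={failures}")
    if latencies:
        avg_ms = statistics.mean(latencies)
        max_ms = max(latencies)
        print(f"latency_ms avg={avg_ms:.1f} max={max_ms:.1f}")
        if max_ms > LATENCY_BUDGET_MS:
            print("WARN: max latency exceeds budget")  # would alert/page in production

if __name__ == "__main__":
    main()
```

Running probes like this from multiple locations and alerting on the failure count or latency budget is the kind of health measurement and monitoring automation the role describes.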

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Kolkata, West Bengal, India

Remote

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Cuttack, Odisha, India

Remote

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Bhubaneswar, Odisha, India

Remote

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Cuttack, Odisha, India

Remote

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Guwahati, Assam, India

Remote

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Guwahati, Assam, India

Remote

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Raipur, Chhattisgarh, India

Remote

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Ranchi, Jharkhand, India

Remote

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Raipur, Chhattisgarh, India

Remote

Experience : 6.00+ years
Salary : Confidential (based on experience)
Shift : (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type : Remote
Placement Type : Full-time Permanent Position
(*Note: This is a requirement for one of Uplers' clients - Netskope)

What do you need for this opportunity?
Must-have skills: Airflow, LLMs, MLOps, Generative AI, Python

About The Role
Please note, this team is hiring across all levels; candidates are individually assessed and appropriately leveled based on their skills and experience.
The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products.
We are looking for skilled engineers experienced in building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. This is a hands-on, impactful role that will help lead the development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments.

What's In It For You
- You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics.
- Your contributions will have a major impact on our global customer base and across the industry through our market-leading products.
- You will solve complex, interesting challenges and improve the depth and breadth of your technical and business skills.

What You Will Be Doing
- Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security.
- Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments.
- Apply MLOps best practices to deploy and monitor machine learning models in production.
- Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP.
- Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications (see the illustrative sketch after this listing).
- Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats.
- Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards.
- Drive innovation by integrating the latest AI/ML techniques into security products and services.
- Mentor junior engineers and provide technical leadership across projects.

Required Skills And Experience
AI/ML Expertise
- Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection.
- Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn.
- Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow).
- Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases.
Data Engineering
- Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing.
- Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink).
- Proficiency working with relational and non-relational databases, including ClickHouse and BigQuery.
- Familiarity with vector databases such as Pinecone and pgvector and their application in RAG systems.
- Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake.
Cloud and Security Knowledge
- Strong understanding of cloud platforms (AWS, Azure, GCP) and their services.
- Experience with network security concepts, extended detection and response, and threat modeling.
Software Engineering
- Proficiency in Python, Java, or Scala for data and ML solution development.
- Expertise in scalable system design and performance optimization for high-throughput applications.
Leadership and Collaboration
- Proven ability to lead cross-functional teams and mentor engineers.
- Strong communication skills to present complex technical concepts to stakeholders.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of being shortlisted and meeting the client for the interview.

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
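Purely as a hedged illustration of the RAG work described in the listing above (this sketch is not part of the posting), the minimal Python example below shows the retrieval half of such a pipeline: chunks are embedded, held in a toy in-memory store, and the top-k matches for a query are assembled into an LLM prompt. The embed() function, the sample chunks, and the InMemoryVectorStore class are invented stand-ins; a production system would use a real embedding model, a vector database such as Pinecone or pgvector, and an actual LLM call for generation.

```python
# Minimal, illustrative RAG retrieval step: embed chunks, index them in a toy
# in-memory "vector store", and fetch the top-k matches for a query so they can
# be passed to an LLM as context. embed() is a stand-in for a real embedding model.
import math
from collections import Counter


def embed(text: str) -> dict[str, float]:
    # Stand-in embedding: L2-normalized term-frequency vector. A production system
    # would call an embedding model and store dense vectors in a vector database.
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(v * v for v in counts.values())) or 1.0
    return {tok: v / norm for tok, v in counts.items()}


def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    # Cosine similarity between two sparse, already-normalized vectors.
    return sum(a[t] * b.get(t, 0.0) for t in a)


class InMemoryVectorStore:
    def __init__(self) -> None:
        self._rows: list[tuple[str, dict[str, float]]] = []

    def add(self, chunk: str) -> None:
        self._rows.append((chunk, embed(chunk)))

    def top_k(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        ranked = sorted(self._rows, key=lambda row: cosine(q, row[1]), reverse=True)
        return [chunk for chunk, _ in ranked[:k]]


if __name__ == "__main__":
    store = InMemoryVectorStore()
    for chunk in [
        "Anomalous egress traffic detected from host 10.0.0.7",
        "MLflow tracks model versions deployed to production",
        "CASB policies govern sanctioned SaaS application usage",
    ]:
        store.add(chunk)
    context = store.top_k("which hosts show anomalous traffic?")
    prompt = "Answer using only this context:\n" + "\n".join(context)
    print(prompt)  # in a real RAG system this prompt would be sent to an LLM
```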

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Jamshedpur, Jharkhand, India

Remote

Experience : 15.00+ years
Salary : Confidential (based on experience)
Shift : (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type : Remote
Placement Type : Full-time Permanent Position
(*Note: This is a requirement for one of Uplers' clients - Netskope)

What do you need for this opportunity?
Must-have skills: CI/CD, RCA, Performance Engineering, AWS, Scaled Agile, Python, Grafana, Linux

About The Role
Please note, this team is hiring across all levels; candidates are individually assessed and appropriately leveled based on their skills and experience.
The Application SRE Team supports several critical components of our foundational technologies for real-time protection, as well as our RBI and SSPM services. We are a team of software engineers focused on improving availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning of the engineering stacks. If you are passionate about solving complex problems and developing cloud services at scale, we would like to speak with you.

What's In It For You
- You will be part of a high-caliber engineering team in the exciting space of cloud tools and infrastructure management.
- You will have an opportunity to work on hybrid cloud (Google Cloud, on-prem) with cutting-edge tooling such as Spinnaker, Kubernetes, and Docker.
- You will solve complex, exciting challenges and improve the depth and breadth of your technical and analytical skills.
- Your contributions to our market-leading product support will significantly impact our rapidly growing global customer base.

What You Will Be Doing
- Partner closely with our development teams and product managers to architect and build features that are highly available, performant, and secure.
- Develop innovative ways to measure, monitor, and report application and infrastructure health (illustrated in the sketch after this listing).
- Gain deep knowledge of our application stack.
- Improve the performance of microservices and resolve scaling and performance issues.
- Own capacity management and planning.
- Function well in a fast-paced, rapidly changing environment.
- Participate with the dev teams in a 24x7 on-call rotation.
- Debug and optimize code and automate routine tasks.
- Drive efficiencies in systems and processes: capacity planning, configuration management, performance tuning, monitoring, and root cause analysis.

Required Skills And Experience
- 15+ years of experience troubleshooting Unix/Linux.
- Experience managing a large-scale web operations role.
- Experience in one or more of the following: C, C++, Java, Python, Go, Perl, or Ruby.
- Experience with algorithms, data structures, complexity analysis, and software design.
- Hands-on experience with private or public cloud services in a highly available and scalable production environment.
- Experience with continuous integration and deployment automation tools such as Jenkins and Ansible.
- Knowledge of distributed systems is a big plus.
- Previous experience working with geographically distributed coworkers.
- Strong interpersonal communication skills (listening, speaking, and writing) and the ability to work well in a diverse, team-focused environment with other SREs, developers, product managers, etc.
- Should have led teams, collaborating cross-functionally to deliver complex software features and solutions.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of being shortlisted and meeting the client for the interview.

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
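As a rough, non-authoritative sketch of the "measure, monitor and report application health" responsibility in the listing above (again, not part of the posting), the snippet below probes a hypothetical health endpoint and prints Prometheus-style metrics. The URL, metric names, and labels are assumptions; a real setup would expose these to a scraper feeding dashboards in a tool like Grafana.

```python
# Minimal, illustrative health probe: poll a service endpoint, measure latency,
# and print Prometheus-style text metrics that a scraper could collect and a
# Grafana dashboard could chart. The endpoint and metric names are made up.
import time
import urllib.error
import urllib.request

SERVICE_URL = "http://localhost:8080/healthz"  # hypothetical endpoint


def probe(url: str, timeout: float = 2.0) -> tuple[int, float]:
    """Return (up_flag, latency_seconds) for a single HTTP GET probe."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            up = 1 if 200 <= resp.status < 300 else 0
    except (urllib.error.URLError, OSError):
        up = 0
    return up, time.monotonic() - start


if __name__ == "__main__":
    up, latency = probe(SERVICE_URL)
    # Prometheus text exposition format; the label identifies the probed service.
    print(f'probe_up{{service="example"}} {up}')
    print(f'probe_latency_seconds{{service="example"}} {latency:.4f}')
```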

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Amritsar, Punjab, India

Remote

Experience : 6.00 + years | Remote | Full-time Permanent Position (requirement for one of Uplers' clients - Netskope). Must-have skills: Airflow, LLMs, MLOps, Generative AI, Python. Role description, compensation details, and application steps are identical to the Data Engineering team listing above (Raipur, Chhattisgarh).

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Jaipur, Rajasthan, India

Remote

Experience : 15.00 + years | Remote | Full-time Permanent Position (requirement for one of Uplers' clients - Netskope). Must-have skills: CI/CD, RCA, Performance Engineering, AWS, Scaled Agile, Python, Grafana, Linux. Role description, compensation details, and application steps are identical to the Application SRE listing above (Jamshedpur, Jharkhand).

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Jaipur, Rajasthan, India

Remote

Experience : 6.00 + years | Remote | Full-time Permanent Position (requirement for one of Uplers' clients - Netskope). Must-have skills: Airflow, LLMs, MLOps, Generative AI, Python. Role description, compensation details, and application steps are identical to the Data Engineering team listing above (Raipur, Chhattisgarh).

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Greater Lucknow Area

Remote

Experience : 15.00 + years | Remote | Full-time Permanent Position (requirement for one of Uplers' clients - Netskope). Must-have skills: CI/CD, RCA, Performance Engineering, AWS, Scaled Agile, Python, Grafana, Linux. Role description, compensation details, and application steps are identical to the Application SRE listing above (Jamshedpur, Jharkhand).

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Greater Lucknow Area

Remote

Experience : 6.00 + years | Remote | Full-time Permanent Position (requirement for one of Uplers' clients - Netskope). Must-have skills: Airflow, LLMs, MLOps, Generative AI, Python. Role description, compensation details, and application steps are identical to the Data Engineering team listing above (Raipur, Chhattisgarh).

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Thane, Maharashtra, India

Remote

Experience : 15.00 + years | Remote | Full-time Permanent Position (requirement for one of Uplers' clients - Netskope). Must-have skills: CI/CD, RCA, Performance Engineering, AWS, Scaled Agile, Python, Grafana, Linux. Role description, compensation details, and application steps are identical to the Application SRE listing above (Jamshedpur, Jharkhand).

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Thane, Maharashtra, India

Remote

Experience : 6.00 + years | Remote | Full-time Permanent Position (requirement for one of Uplers' clients - Netskope). Must-have skills: Airflow, LLMs, MLOps, Generative AI, Python. Role description, compensation details, and application steps are identical to the Data Engineering team listing above (Raipur, Chhattisgarh).

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Nagpur, Maharashtra, India

Remote

Experience : 15.00 + years | Remote | Full-time Permanent Position (requirement for one of Uplers' clients - Netskope). Must-have skills: CI/CD, RCA, Performance Engineering, AWS, Scaled Agile, Python, Grafana, Linux. Role description, compensation details, and application steps are identical to the Application SRE listing above (Jamshedpur, Jharkhand).

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Nagpur, Maharashtra, India

Remote

Experience : 6.00 + years | Remote | Full-time Permanent Position (requirement for one of Uplers' clients - Netskope). Must-have skills: Airflow, LLMs, MLOps, Generative AI, Python. Role description, compensation details, and application steps are identical to the Data Engineering team listing above (Raipur, Chhattisgarh).

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Kanpur, Uttar Pradesh, India

Remote

Experience : 15.00 + years | Remote | Full-time Permanent Position (requirement for one of Uplers' clients - Netskope). Must-have skills: CI/CD, RCA, Performance Engineering, AWS, Scaled Agile, Python, Grafana, Linux. Role description, compensation details, and application steps are identical to the Application SRE listing above (Jamshedpur, Jharkhand).

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Nashik, Maharashtra, India

Remote

Experience: 6.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients - Netskope)

What do you need for this opportunity?
Must-have skills: Airflow, LLMs, MLOps, Generative AI, Python

Netskope is looking for:

About The Role
Please note, this team is hiring across all levels; candidates are individually assessed and leveled according to their skills and experience.
The Data Engineering team builds and optimizes systems spanning data ingestion, processing, and storage optimization. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products.
We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing, and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time, and fault-tolerant solutions to their ever-growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing, and maintenance of logical and physical data models that support various OLTP and analytics environments.

What's In It For You
- You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics.
- Your contributions will have a major impact on our global customer base and across the industry through our market-leading products.
- You will solve complex, interesting challenges and improve the depth and breadth of your technical and business skills.

What You Will Be Doing
- Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security.
- Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments.
- Apply MLOps best practices to deploy and monitor machine learning models in production.
- Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP.
- Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications (see the sketch after this listing).
- Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats.
- Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards.
- Drive innovation by integrating the latest AI/ML techniques into security products and services.
- Mentor junior engineers and provide technical leadership across projects.

Required Skills And Experience
AI/ML Expertise
- Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection.
- Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn.
- Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow).
- Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases.

Data Engineering
- Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing.
- Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink).
- Proficiency working with relational and non-relational databases, including ClickHouse and BigQuery.
- Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems.
- Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake.

Cloud and Security Knowledge
- Strong understanding of cloud platforms (AWS, Azure, GCP) and their services.
- Experience with network security concepts, extended detection and response, and threat modeling.

Software Engineering
- Proficiency in Python, Java, or Scala for data and ML solution development.
- Expertise in scalable system design and performance optimization for high-throughput applications.

Leadership and Collaboration
- Proven ability to lead cross-functional teams and mentor engineers.
- Strong communication skills to present complex technical concepts to stakeholders.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity?
Step 1: Click on Apply, then register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal besides this one. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
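For readers unfamiliar with the RAG responsibility described in this listing, here is a minimal illustrative skeleton in Python. It is not Netskope's implementation: the embed() function, the in-memory store, and the generate() stub are stand-ins for a real embedding model, a vector database such as Pinecone or PGVector, and an LLM API, and the sample documents and names are invented for the example.

```python
# Illustrative RAG skeleton: embed documents, retrieve the top-k most similar
# ones for a query, and assemble a context-grounded prompt for an LLM.
# embed() and generate() are placeholders for a real embedding model and
# LLM/vector-database client; the snippets below are made up for the example.
import hashlib
import numpy as np

DIM = 64  # toy embedding size; real models use hundreds to thousands of dimensions

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: hash tokens into a fixed-size unit vector."""
    vec = np.zeros(DIM)
    for tok in text.lower().split():
        vec[int(hashlib.md5(tok.encode()).hexdigest(), 16) % DIM] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

class InMemoryVectorStore:
    """Minimal stand-in for a vector database (Pinecone, PGVector, ...)."""
    def __init__(self) -> None:
        self.vectors, self.texts = [], []

    def upsert(self, text: str) -> None:
        self.vectors.append(embed(text))
        self.texts.append(text)

    def query(self, question: str, k: int = 2) -> list[str]:
        # Cosine similarity reduces to a dot product because vectors are unit-length.
        scores = np.array(self.vectors) @ embed(question)
        top = np.argsort(scores)[::-1][:k]
        return [self.texts[i] for i in top]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call (hosted API or self-hosted model)."""
    return f"[LLM would answer here, grounded in]\n{prompt}"

if __name__ == "__main__":
    store = InMemoryVectorStore()
    for doc in [
        "Anomalous egress traffic to unknown networks can indicate data exfiltration.",
        "CASB policies govern sanctioned and unsanctioned SaaS application usage.",
        "ZTNA replaces broad VPN access with per-application, identity-aware access.",
    ]:
        store.upsert(doc)

    question = "How can I spot possible data exfiltration?"
    context = "\n".join(store.query(question))
    print(generate(f"Context:\n{context}\n\nQuestion: {question}"))
```

Swapping the in-memory store for a managed vector database mostly changes the upsert and query calls; the retrieve-then-generate flow stays the same.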

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Kochi, Kerala, India

Remote

Experience: 15.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients - Netskope)

What do you need for this opportunity?
Must-have skills: CI/CD, RCA, Performance Engineering, AWS, Scaled Agile, Python, Grafana, Linux

Netskope is looking for:

About The Role
Please note, this team is hiring across all levels; candidates are individually assessed and leveled according to their skills and experience.
The Application SRE Team supports several critical components of our foundational technologies for real-time protection, as well as our RBI and SSPM services. We are a team of software engineers focused on improving availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning of the engineering stacks. If you are passionate about solving complex problems and developing cloud services at scale, we would like to speak with you.

What's In It For You
- You will be part of a high-caliber engineering team in the exciting space of cloud tools and infrastructure management.
- You will have an opportunity to work on hybrid cloud (Google Cloud, on-prem cloud) and with cutting-edge tooling such as Spinnaker, Kubernetes, Docker, and more.
- You will solve complex, exciting challenges and improve the depth and breadth of your technical and analytical skills.
- Your contributions to our market-leading product support will significantly impact our rapidly growing global customer base.

What You Will Be Doing
- Partner closely with our development teams and product managers to architect and build features that are highly available, performant, and secure.
- Develop innovative ways to measure, monitor, and report application and infrastructure health (a minimal sketch follows this listing).
- Gain deep knowledge of our application stack.
- Improve the performance of microservices and solve scaling/performance issues.
- Own capacity management and planning.
- Function well in a fast-paced, rapidly changing environment.
- Participate with the dev teams in a 24x7 on-call rotation.
- Debug and optimize code and automate routine tasks.
- Drive efficiencies in systems and processes: capacity planning, configuration management, performance tuning, monitoring, and root cause analysis.

Required Skills And Experience
- 15+ years of experience troubleshooting Unix/Linux.
- Experience in managing a large-scale web operations role.
- Experience in one or more of the following: C, C++, Java, Python, Go, Perl, or Ruby.
- Experience with algorithms, data structures, complexity analysis, and software design.
- Hands-on experience with private or public cloud services in a highly available and scalable production environment.
- Experience with continuous integration and deployment automation tools such as Jenkins, Ansible, etc.
- Knowledge of distributed systems is a big plus.
- Previous experience working with geographically distributed coworkers.
- Strong interpersonal communication skills (listening, speaking, and writing) and the ability to work well in a diverse, team-focused environment with other SREs, developers, product managers, etc.
- Should have led teams, collaborating cross-functionally to deliver complex software features and solutions.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity?
Step 1: Click on Apply, then register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal besides this one. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
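As a rough illustration of the "measure, monitor, and report application health" responsibility in this listing, the Python sketch below exposes request latency and error counters in Prometheus format so a tool like Grafana can scrape and chart them. The service behavior, port, and metric names are assumptions made for the example; this is not Netskope's actual tooling.

```python
# Illustrative health-metrics sketch: expose request latency and error counters
# in Prometheus exposition format for scraping (and charting in Grafana).
# The simulated handler, port, and metric names are invented for this example.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Requests handled", ["endpoint", "status"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency", ["endpoint"])

def handle_request(endpoint: str) -> None:
    """Simulated request handler that records latency and outcome."""
    start = time.perf_counter()
    try:
        time.sleep(random.uniform(0.01, 0.2))   # stand-in for real work
        if random.random() < 0.05:              # ~5% simulated failures
            raise RuntimeError("backend timeout")
        REQUESTS.labels(endpoint=endpoint, status="ok").inc()
    except RuntimeError:
        REQUESTS.labels(endpoint=endpoint, status="error").inc()
    finally:
        LATENCY.labels(endpoint=endpoint).observe(time.perf_counter() - start)

if __name__ == "__main__":
    start_http_server(9100)  # metrics served at http://localhost:9100/metrics
    while True:
        handle_request("/api/v1/policy")
```

From these series, a Grafana dashboard could, for example, chart the error ratio with a PromQL query such as rate(app_requests_total{status="error"}[5m]) / rate(app_requests_total[5m]) and latency quantiles from the app_request_latency_seconds histogram buckets.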

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies