14.0 years
50 - 80 Lacs
Surat, Gujarat, India
Remote
Experience: 14.00+ years
Salary: INR 5,000,000-8,000,000 / year (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients, Netskope.)

What do you need for this opportunity?
Must-have skills: Python, Golang, AWS, Distributed Systems

Netskope is looking for:

About Netskope
Today, there's more data and there are more users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.

About The Role
Please note, this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based upon their skills and experience. Incident Management Services operate within a distributed microservice architecture, handling a large volume of incidents generated by scanning agents such as Data Loss Prevention components. Our comprehensive suite of services is designed to streamline incident handling, facilitate forensic investigations, and securely upload and download high-scale, customer-sensitive data to customer-configured cloud endpoints for further investigation. It provides a workflow for admins to view incidents and forensics and take action on them.

What's In It For You
As a member of this team, you will work in an innovative, fast-paced environment with other experts to develop and enhance Incident Management capabilities. You will solve complex scale problems and deploy and manage the solution in production, including interactions with well-known SaaS and IaaS applications via their APIs at cloud scale. If you are driven by high-quality, high-velocity software delivery challenges, and by using innovative, cutting-edge solutions to achieve these goals, we would like to speak with you.

What You Will Be Doing
- Designing and building large-scale, highly replicable cloud-based products and services
- Building REST APIs to provide provisioning services (a minimal illustrative sketch follows this listing)
- Coding in Golang and Python
- Using emerging technologies to build a highly performant, distributed and scalable system that can seamlessly interoperate with other enterprise elements
- Designing data access layers to optimize the storage and retrieval of data
- Working with Product Management to understand and define requirements

Required Skills and Experience
- 12+ years of experience designing and developing enterprise-grade software, with a strong focus on building scalable, high-performance cloud services using a microservices architecture
- Strong capability in driving architectural decisions, selecting the right technologies (database, messaging system, APIs, etc.) and guiding the team through large-scale architectural changes
- Expertise in Incident Management System tools and workflows, specializing in building scalable solutions for incident tracking, automation and response; strong experience in designing and optimizing Incident Management integrations and workflows is required; exposure to Incident Management tools like SCIM and Splunk is preferred
- Exposure to AWS is preferred
- In-depth knowledge and hands-on experience with Secure Vault (HSM), the Ceph datastore, and encryption standards are required
- Excellent analytical and problem-solving skills and a firm grasp of algorithms and data structures
- Expertise with Docker, Kubernetes, and Helm charts; a deep understanding of Kubernetes and its debugging is a must
- Experience with NoSQL databases (MongoDB, MariaDB, Cassandra, etc.) and messaging technologies such as Kafka
- Expertise in REST APIs and their application in SaaS, PaaS and IaaS
- Strong OO design and programming skills, and experience with Python and Golang
- Experience with HTTPS and Web 2.0 programming required
- Experience with SOAP, JSON and other web API frameworks

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
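The listing above highlights building REST APIs for provisioning and incident workflows in Golang and Python within a distributed incident-management service. As a rough, hypothetical illustration of that kind of service (not Netskope's actual API), here is a minimal Python sketch using Flask; the routes, fields, and statuses are assumptions made purely for the example, and the in-memory store stands in for a real replicated datastore.

```python
# Minimal, hypothetical sketch of an incident-management style REST service.
# This is NOT Netskope's API; route names, fields, and statuses are illustrative assumptions.
import uuid
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store; a production service would use a replicated datastore instead.
INCIDENTS: dict[str, dict] = {}

@app.post("/incidents")
def create_incident():
    payload = request.get_json(silent=True) or {}
    incident_id = str(uuid.uuid4())
    incident = {
        "id": incident_id,
        "source": payload.get("source", "dlp-scanner"),   # e.g. a DLP scanning agent
        "severity": payload.get("severity", "medium"),
        "status": "open",
    }
    INCIDENTS[incident_id] = incident
    return jsonify(incident), 201

@app.get("/incidents/<incident_id>")
def get_incident(incident_id: str):
    incident = INCIDENTS.get(incident_id)
    if incident is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(incident)

@app.patch("/incidents/<incident_id>")
def update_incident(incident_id: str):
    incident = INCIDENTS.get(incident_id)
    if incident is None:
        return jsonify({"error": "not found"}), 404
    incident["status"] = (request.get_json(silent=True) or {}).get("status", incident["status"])
    return jsonify(incident)

if __name__ == "__main__":
    app.run(port=8080)
```

Running this locally and POSTing a JSON body to /incidents returns the stored incident with a generated id; a production provisioning or incident service would add authentication, validation, pagination, and durable storage.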
Posted 3 weeks ago
14.0 years
50 - 80 Lacs
Ahmedabad, Gujarat, India
Remote
Remote, full-time permanent position with Uplers' client Netskope: 14.00+ years' experience, INR 5,000,000-8,000,000 per year (based on experience), IST shift. Must-have skills: Python, Golang, AWS, Distributed Systems. The role description, requirements, and application steps are identical to the Surat listing above.
Posted 3 weeks ago
14.0 years
50 - 80 Lacs
Jaipur, Rajasthan, India
Remote
Remote, full-time permanent position with Uplers' client Netskope: 14.00+ years' experience, INR 5,000,000-8,000,000 per year (based on experience), IST shift. Must-have skills: Python, Golang, AWS, Distributed Systems. The role description, requirements, and application steps are identical to the Surat listing above.
Posted 3 weeks ago
14.0 years
50 - 80 Lacs
Greater Lucknow Area
Remote
Remote, full-time permanent position with Uplers' client Netskope: 14.00+ years' experience, INR 5,000,000-8,000,000 per year (based on experience), IST shift. Must-have skills: Python, Golang, AWS, Distributed Systems. The role description, requirements, and application steps are identical to the Surat listing above.
Posted 3 weeks ago
14.0 years
50 - 80 Lacs
Thane, Maharashtra, India
Remote
Remote, full-time permanent position with Uplers' client Netskope: 14.00+ years' experience, INR 5,000,000-8,000,000 per year (based on experience), IST shift. Must-have skills: Python, Golang, AWS, Distributed Systems. The role description, requirements, and application steps are identical to the Surat listing above.
Posted 3 weeks ago
14.0 years
50 - 80 Lacs
Kanpur, Uttar Pradesh, India
Remote
Remote, full-time permanent position with Uplers' client Netskope: 14.00+ years' experience, INR 5,000,000-8,000,000 per year (based on experience), IST shift. Must-have skills: Python, Golang, AWS, Distributed Systems. The role description, requirements, and application steps are identical to the Surat listing above.
Posted 3 weeks ago
14.0 years
50 - 80 Lacs
Nagpur, Maharashtra, India
Remote
Remote, full-time permanent position with Uplers' client Netskope: 14.00+ years' experience, INR 5,000,000-8,000,000 per year (based on experience), IST shift. Must-have skills: Python, Golang, AWS, Distributed Systems. The role description, requirements, and application steps are identical to the Surat listing above.
Posted 3 weeks ago
14.0 years
50 - 80 Lacs
Nashik, Maharashtra, India
Remote
Remote, full-time permanent position with Uplers' client Netskope: 14.00+ years' experience, INR 5,000,000-8,000,000 per year (based on experience), IST shift. Must-have skills: Python, Golang, AWS, Distributed Systems. The role description, requirements, and application steps are identical to the Surat listing above.
Posted 3 weeks ago
3.0 - 6.0 years
7 - 9 Lacs
Navi Mumbai
Work from Office
Overall 3-10 years' experience in network security, with at least 3 years working on proxy solutions. Proficiency with proxy management. Experience working in Windows, Linux, and Unix environments.
Posted 3 weeks ago
5.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Talent Acquisition & HR Specialist
Location: New Delhi (primarily in-office, with some flexibility)

About PJ Networks:
PJ Networks Pvt Ltd is a leading IT system integrator specializing in cybersecurity, network infrastructure, and data center solutions. We are trusted partners of Fortinet, Dell, Cisco, Netskope, HP, CrowdStrike, Trellix, and eScan. As we scale rapidly, we are looking for an HR professional to lead recruitment and manage HR operations end to end.

Role Summary:
This role will drive talent acquisition for Sales, Technical, and Pre-Sales roles, and handle core HR processes including onboarding, documentation, compliance, and employee engagement.

Key Responsibilities:

Talent Acquisition (Priority)
- End-to-end recruitment for: Business Development Executives, Inside Sales, Pre-Sales Consultants, Network & Security Engineers
- Drafting and posting job ads
- Screening resumes and conducting first-level interviews
- Scheduling and coordinating interview rounds
- Managing offer letters, negotiation, and joining formalities

Onboarding & HR Operations
- Conducting induction and orientation
- Preparing and maintaining employee records
- Managing attendance and leave records
- Coordinating background verification and documentation
- Assisting with payroll inputs and statutory compliance (PF, ESI)

Employee Engagement & Culture
- Organizing engagement activities, team events, and celebrations
- Handling employee queries and basic grievances
- Supporting performance management processes

Requirements:
- Education: Graduate/Postgraduate in HR, Business Administration, or a related field
- Experience: 2-5 years in talent acquisition and HR operations (experience in IT, system integration, or tech industries preferred)
- Strong understanding of sales and technical recruitment
- Good knowledge of HR policies, compliance, and documentation
- Excellent communication, coordination, and organizational skills
Posted 3 weeks ago
12.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Netskope
Today, there's more data and there are more users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.

About The Role
Drive the evolution of SaaS security in a cloud-first world. As an Engineering Manager on our SPM team, you'll lead the charge in empowering organizations to confidently adopt SaaS applications, mitigating security threats through innovative solutions. You'll tackle the intricate challenge of mapping complex associations across diverse SaaS platforms (O365, Zoom, GitHub, Workday, and more) to uncover and neutralize potential vulnerabilities (a simplified sketch of this association-mapping idea follows this listing). Your leadership will be pivotal in delivering high-value features and continuously enhancing our cloud-scale SaaS Security Posture Management (SSPM) platform, collaborating closely with product, support, and customer success teams.

What's In It For You
As the Engineering Manager, you will drive teams responsible for designing, developing, testing and operating highly reliable platforms. You will:
- Manage a high-caliber, globally distributed engineering team.
- Collaborate with cross-functional groups like product management, security researchers and UX to launch high-impact features and functionality.
- Use modern development, testing and deployment methodologies to deliver software services to our customers.

What You Will Be Doing
- Provide hands-on leadership to drive cross-functional scrum teams consisting of backend engineers, security researchers, QE and UI engineers.
- Develop a long-term engineering vision for the product areas owned.
- Review code, technical design and architecture on an ongoing basis.
- Maintain a high bar for engineering excellence and accountability.
- Hire, train and assess the performance of direct reports across different levels.
- Assist in the growth of employees through coaching, training and career development activities.
- Handle customer escalations, performing root cause analysis and driving customer issues to closure.

Required Skills And Experience
- At least 12 years of relevant software engineering experience.
- At least 2 years of experience managing high-performing engineering teams.
- Experience leading at least one customer-facing cloud service end to end is mandatory.
- Prior experience building and operating services in public clouds is mandatory, preferably AWS.
- Prior experience developing or managing SaaS/IaaS connectors is preferable.
- Prior experience working in product organizations and the ability to collaborate with UX, Product Management and other engineering teams is preferable.
- Experience participating in product grooming, scoping, planning and delivery of complex software products and features.
- Demonstrated ability to bootstrap teams for greenfield initiatives and set processes and guidelines (e.g. shift-left quality, CI/CD).
- Comfortable using frameworks like OKRs to measure the productivity of the team and hold them accountable.
- Prior experience in backend programming languages like Java or Golang is required.
- Demonstrable experience guiding modern software engineering teams building microservices.
- Strong written and verbal communication skills, and the ability to communicate in an open, transparent, collaborative and consistent manner with other teams and your co-workers.
- Strong people skills with prior experience in hiring, coaching, retaining and performance management of engineers at different levels of experience.
- Prior experience synthesizing data and presenting the work done by teams to the senior leadership team.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

Netskope is committed to implementing equal employment opportunities for all employees and applicants for employment. Netskope does not discriminate in employment opportunities or practices based on religion, race, color, sex, marital or veteran status, age, national origin, ancestry, physical or mental disability, medical condition, sexual orientation, gender identity/expression, genetic information, pregnancy (including childbirth, lactation and related medical conditions), or any other characteristic protected by the laws or regulations of any jurisdiction in which we operate. Netskope respects your privacy and is committed to protecting the personal information you share with us; please refer to Netskope's Privacy Policy for more details.
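The role description above centers on mapping associations across SaaS platforms to surface potential exposures. As a loose, hypothetical sketch of that general SSPM idea (not Netskope's implementation), the Python snippet below models a few SaaS resources and sharing relationships as edges and flags sensitive resources reachable by public principals; every app name, relation, and rule is an assumption made for illustration only.

```python
# Hypothetical sketch: modeling SaaS resource associations as a tiny graph and flagging
# risky exposure paths. Illustration of the general SSPM idea only, not Netskope's
# implementation; all app names, fields, and rules are assumed for the example.
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    src: str       # e.g. "o365:report.xlsx"
    dst: str       # e.g. "user:external@partner.com"
    relation: str  # e.g. "shared_with"

EDGES = [
    Edge("o365:report.xlsx", "user:alice@corp.com", "owned_by"),
    Edge("o365:report.xlsx", "link:anyone-with-link", "shared_with"),
    Edge("github:infra-repo", "user:bob@corp.com", "admin_of"),
]

SENSITIVE = {"o365:report.xlsx"}              # assumed output of a classification step
PUBLIC_PRINCIPALS = {"link:anyone-with-link"}  # principals treated as "anyone"

def find_exposures(edges, sensitive, public_principals):
    """Return (resource, principal) pairs where a sensitive resource is publicly shared."""
    return [
        (e.src, e.dst)
        for e in edges
        if e.relation == "shared_with" and e.src in sensitive and e.dst in public_principals
    ]

print(find_exposures(EDGES, SENSITIVE, PUBLIC_PRINCIPALS))
# [('o365:report.xlsx', 'link:anyone-with-link')]
```

A real posture-management system would ingest these associations from each SaaS provider's APIs and evaluate far richer policy rules, but the core pattern of walking a resource-relationship graph is the same.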
Posted 3 weeks ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
About the Company
We are a forward-thinking technology company dedicated to delivering innovative solutions that empower businesses to thrive in the digital age. Our mission is to harness the power of cloud computing and AI to drive efficiency and productivity while fostering a culture of collaboration and continuous improvement.

About the Role
The role involves architecting, implementing, and managing secure, scalable, and highly available Azure cloud environments, while also integrating AI-based productivity solutions and driving automation across various platforms.

Responsibilities

Cloud Infrastructure & Operations
- Architect, implement, and manage secure, scalable, and highly available Azure cloud environments.
- Use Infrastructure-as-Code tools (Terraform, Bicep, ARM templates) for provisioning and configuration.
- Monitor usage, implement cost optimization strategies, and ensure governance and compliance.
- Manage identity using Entra ID (Azure AD) and integrate with other Microsoft and SaaS applications.

AI & Copilot Studio Integration
- Design AI-based productivity solutions using Microsoft Copilot Studio agents.
- Develop and deploy conversational agents integrated into Microsoft Teams, SharePoint, or Power Platform.
- Define responsible AI practices: model testing, monitoring, and governance.
- Build AI-enhanced business workflows to solve real-world problems.

Power Platform Automation
- Build Power Apps for process digitization and user engagement.
- Design Power Automate workflows for routine tasks, data processing, and integrations.
- Develop custom connectors, APIs, and Logic Apps to extend platform functionality.

DevOps, Automation & CI/CD
- Design and maintain CI/CD pipelines using Azure DevOps and GitHub Actions.
- Leverage Docker and Kubernetes (AKS) for microservices or containerized workloads.
- Automate deployment and monitoring of applications across environments.
- Use scripting (PowerShell, Python, Bash) and APIs for automation and integration (a small illustrative sketch follows this listing).

Security, Compliance & Governance
- Implement and enforce cloud security best practices (IAM, encryption, access controls).
- Support SASE (Netskope), EDR (CrowdStrike), firewalls (FortiGate), and SecureW2 authentication.

Collaboration, Innovation & Mentorship
- Collaborate with cross-functional teams to identify, design, and implement business solutions.
- Mentor junior engineers; share knowledge and lead internal enablement sessions.
- Drive adoption of AI tools and cloud solutions to maximize productivity and business impact.
- Contribute to the IT Knowledge Base and implement ITIL-based support processes.

Qualifications
Educational Background: Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.

Required Skills (Core Technologies)
- Microsoft Azure (compute, networking, storage, security, automation)
- Microsoft 365 (Entra ID, Intune, Defender for Cloud, Conditional Access)
- Microsoft Power Platform (Power Apps, Power Automate, Power BI, Power Virtual Agents)
- Microsoft Copilot Studio (agent design, Teams & SharePoint integrations)
- DevOps tools (Azure DevOps, GitHub, Terraform, Ansible, Logic Apps)
- Security & compliance (Defender, SASE, Fortinet, CrowdStrike, Valimail, SecureW2)
- Scripting & automation (PowerShell, Python, Bash, API integrations)

Preferred Skills (Nice to Have)
- Kubernetes / Docker / Azure Kubernetes Service (AKS)
- Printer Logic, Visual Studio Core, ConnectWise (RMM, PSA, ScreenConnect), Site24x7
- Mac, Windows, and Linux server administration experience
- Autodesk Docs, Microsoft SQL Server, IIS

Preferred Certifications
- Microsoft Certified: Azure Administrator Associate
- Microsoft Certified: Azure Solutions Architect Expert
- Microsoft Certified: Azure AI Engineer Associate
- ITIL v4 Foundation or higher (preferred)

Pay Range and Compensation Package
Compensation details will be discussed during the interview process.

Equal Opportunity Statement
We are committed to creating a diverse and inclusive environment for all employees. We encourage applications from individuals of all backgrounds and experiences.
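The responsibilities above call for scripting (PowerShell, Python, Bash) and Azure automation alongside Infrastructure-as-Code. As a small, hedged example of the Python side of that work, the sketch below uses the Azure SDK (the azure-identity and azure-mgmt-resource packages, assumed to be installed) to create or update a tagged resource group; the subscription-ID environment variable, group name, and tags are illustrative assumptions, and in an IaC-first setup Terraform or Bicep would normally own such resources.

```python
# Hypothetical sketch of the kind of Python automation the listing describes: creating
# and tagging an Azure resource group with the Azure SDK. Assumes azure-identity and
# azure-mgmt-resource are installed and that DefaultAzureCredential can find credentials
# (e.g. via `az login` or a managed identity). Names and tags are illustrative only.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]  # assumed environment variable
credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, subscription_id)

# Idempotent create-or-update of a resource group with cost-governance tags.
rg = client.resource_groups.create_or_update(
    "rg-automation-demo",
    {
        "location": "eastus",
        "tags": {"owner": "cloud-team", "costCenter": "it-ops", "managedBy": "python-sdk"},
    },
)
print(f"Provisioned {rg.name} in {rg.location} with tags {rg.tags}")
```

The same pattern (credential, management client, idempotent create-or-update) extends to other Azure management clients, which is why it pairs well with CI/CD pipelines that re-run safely on every deploy.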
Posted 3 weeks ago
6.0 years
0 Lacs
Indore, Madhya Pradesh, India
Remote
Experience : 6.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' clients - Netskope)
What do you need for this opportunity?
Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python
Netskope is Looking for:
About The Role
Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience.
The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products.
We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault-tolerant solutions to their ever-growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments.
What's In It For You
You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics
Your contributions will have a major impact on our global customer base and across the industry through our market-leading products
You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills.
What You Will Be Doing
Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security.
Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments (a minimal orchestration sketch follows this listing).
Apply MLOps best practices to deploy and monitor machine learning models in production.
Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP.
Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications (a minimal retrieval sketch follows this listing).
Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats.
Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards.
Drive innovation by integrating the latest AI/ML techniques into security products and services.
Mentor junior engineers and provide technical leadership across projects.
Required Skills And Experience
AI/ML Expertise
Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection.
Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn.
Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow).
Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases.
Data Engineering
Expertise in designing and optimizing ETL/ELT pipelines for large-scale data processing.
Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink).
Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery.
Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems.
Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake.
Cloud and Security Knowledge
Strong understanding of cloud platforms (AWS, Azure, GCP) and their services.
Experience with network security concepts, extended detection and response, and threat modeling.
Software Engineering
Proficiency in Python, Java, or Scala for data and ML solution development.
Expertise in scalable system design and performance optimization for high-throughput applications.
Leadership and Collaboration
Proven ability to lead cross-functional teams and mentor engineers.
Strong communication skills to present complex technical concepts to stakeholders.
Education
BSCS Or Equivalent Required, MSCS Or Equivalent Strongly Preferred
How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload updated Resume
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!
About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well).
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
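The RAG responsibility above boils down to a retrieve-then-prompt loop. The sketch below shows only that loop, with toy NumPy vectors standing in for real embeddings; in production the embeddings would come from an embedding model, retrieval would hit a vector database such as Pinecone or PGVector, and the assembled prompt would be sent to an LLM. All document text and vector values here are made up.

```python
"""Minimal sketch of the retrieval step in a RAG pipeline (toy data only)."""
import numpy as np

# Toy corpus with pre-computed (fake) embeddings; real systems store these in a vector DB.
DOCS = [
    ("Block exfiltration of PII to unsanctioned SaaS apps.", np.array([0.9, 0.1, 0.0])),
    ("Rotate service-account keys every 90 days.", np.array([0.1, 0.8, 0.1])),
    ("Alert on anomalous login volume from new geographies.", np.array([0.2, 0.1, 0.9])),
]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec: np.ndarray, k: int = 2) -> list[str]:
    """Return the k documents whose embeddings are most similar to the query."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def build_prompt(question: str, query_vec: np.ndarray) -> str:
    """Assemble the context-augmented prompt that would be sent to an LLM."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query_vec))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    # A query about anomalous logins should rank the third document highest.
    print(build_prompt("How do we catch suspicious logins?", np.array([0.25, 0.05, 0.85])))
```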
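For the pipeline and orchestration side, here is a hedged sketch of what an ingestion DAG could look like in Airflow (one of the listed must-have skills). It assumes Airflow 2.4+ and uses stub task bodies; the DAG id, schedule, and task names are illustrative only, not part of the role description.

```python
"""Minimal sketch of an ingestion DAG, assuming Airflow 2.4+."""
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_logs(**_):
    # Placeholder: pull the latest security/telemetry logs from object storage.
    print("extracting logs ...")

def transform_and_load(**_):
    # Placeholder: normalize events and load them into the analytics store.
    print("transforming and loading ...")

with DAG(
    dag_id="telemetry_ingestion_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",  # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_logs", python_callable=extract_logs)
    load = PythonOperator(task_id="transform_and_load", python_callable=transform_and_load)
    extract >> load  # run transform/load only after extraction succeeds
```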
Posted 3 weeks ago
15.0 years
0 Lacs
Indore, Madhya Pradesh, India
Remote
Experience : 15.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' clients - Netskope)
What do you need for this opportunity?
Must have skills required: CI/CD, RCA, Performance Engineering, AWS, Scaled Agile, Python, Grafana, Linux
Netskope is Looking for:
About The Role
Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience.
The Application SRE Team supports several critical components of our foundational technologies for real-time protection, as well as our RBI and SSPM services. We are a team of software engineers focused on improving availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning of the engineering stacks. If you are passionate about solving complex problems and developing cloud services at scale, we would like to speak with you.
What's In It For You
You will be part of a high-caliber engineering team in the exciting space of cloud tools and infrastructure management.
You will have an opportunity to work on hybrid cloud (Google Cloud, on-prem cloud) and work with cutting-edge tooling like Spinnaker, Kubernetes, Docker and more.
You will solve complex, exciting challenges and improve the depth and breadth of your technical and analytical skills.
Your contributions to our market-leading product support will significantly impact our rapidly-growing global customer base.
What You Will Be Doing
Partner closely with our development teams and product managers to architect and build features that are highly available, performant and secure
Develop innovative ways to smartly measure, monitor & report application and infrastructure health (a minimal latency-probe sketch follows this listing)
Gain deep knowledge of our application stack
Improve the performance of micro-services and solve scaling/performance issues
Own capacity management and planning
Function well in a fast-paced and rapidly-changing environment
Participate with the dev teams in a 24x7 on-call rotation
Debug and optimize code and automate routine tasks
Drive efficiencies in systems and processes: capacity planning, configuration management, performance tuning, monitoring and root cause analysis
Required Skills And Experience
15+ years of experience troubleshooting Unix/Linux
Experience in managing a large-scale web operations role
Experience in one or more of the following: C, C++, Java, Python, Go, Perl or Ruby
Experience with algorithms, data structures, complexity analysis, and software design
Hands-on experience with private or public cloud services in a highly available and scalable production environment
Experience with continuous integration and deployment automation tools such as Jenkins, Ansible etc.
Knowledge of distributed systems is a big plus
Previous experience working with geographically distributed coworkers
Strong interpersonal communication skills (including listening, speaking, and writing) and ability to work well in a diverse, team-focused environment with other SREs, developers, Product Managers, etc.
Should have led teams, collaborating cross-functionally to deliver complex software features and solutions.
Education
BSCS Or Equivalent Required, MSCS Or Equivalent Strongly Preferred
How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload updated Resume
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!
About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well).
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
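As one concrete example of "measure, monitor & report application health," the sketch below probes a single HTTP endpoint and reports mean and p95 latency using only the Python standard library. The endpoint URL, sample count, and the 500 ms p95 objective are assumptions for illustration, not Netskope targets.

```python
"""Minimal sketch of a latency probe an SRE might run against a service endpoint."""
import statistics
import time
import urllib.request

TARGET_URL = "https://example.com/healthz"  # placeholder health endpoint
SAMPLES = 20
P95_BUDGET_SECONDS = 0.5  # illustrative latency objective

def probe_once(url: str) -> float:
    """Time one GET request and return the latency in seconds."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=5) as resp:
        resp.read()
    return time.monotonic() - start

def main() -> None:
    latencies = [probe_once(TARGET_URL) for _ in range(SAMPLES)]
    p95 = statistics.quantiles(latencies, n=100)[94]  # 95th percentile
    print(f"mean={statistics.mean(latencies):.3f}s p95={p95:.3f}s")
    if p95 > P95_BUDGET_SECONDS:
        print("p95 latency exceeds the objective; investigate before it burns error budget")

if __name__ == "__main__":
    main()
```

In practice these measurements would be exported to a time-series system and visualized in Grafana rather than printed, but the probe-and-percentile pattern is the same.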
Posted 3 weeks ago
15.0 years
0 Lacs
Chandigarh, India
Remote
Experience : 15.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' clients - Netskope)
Must have skills required: CI/CD, RCA, Performance Engineering, AWS, Scaled Agile, Python, Grafana, Linux
Same Application SRE role, responsibilities, and requirements as the full listing above; only the location differs.
Posted 3 weeks ago
6.0 years
0 Lacs
Chandigarh, India
Remote
Experience : 6.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' clients - Netskope)
Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python
Same Data Engineering (AI/ML) role, responsibilities, and requirements as the full listing above; only the location differs.
Posted 3 weeks ago
6.0 years
0 Lacs
Surat, Gujarat, India
Remote
Experience : 6.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' clients - Netskope)
Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python
Same Data Engineering (AI/ML) role, responsibilities, and requirements as the full listing above; only the location differs.
Posted 3 weeks ago
6.0 years
0 Lacs
Mysore, Karnataka, India
Remote
Experience : 6.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' clients - Netskope)
Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python
Same Data Engineering (AI/ML) role, responsibilities, and requirements as the full listing above; only the location differs.
Posted 3 weeks ago
15.0 years
0 Lacs
Vijayawada, Andhra Pradesh, India
Remote
Experience : 15.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' clients - Netskope)
Must have skills required: CI/CD, RCA, Performance Engineering, AWS, Scaled Agile, Python, Grafana, Linux
Same Application SRE role, responsibilities, and requirements as the full listing above; only the location differs.
Posted 3 weeks ago
15.0 years
0 Lacs
Dehradun, Uttarakhand, India
Remote
Experience : 15.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' clients - Netskope)
Must have skills required: CI/CD, RCA, Performance Engineering, AWS, Scaled Agile, Python, Grafana, Linux
Same Application SRE role, responsibilities, and requirements as the full listing above; only the location differs.
Posted 3 weeks ago
15.0 years
0 Lacs
Mysore, Karnataka, India
Remote
Experience : 15.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' clients - Netskope)
Must have skills required: CI/CD, RCA, Performance Engineering, AWS, Scaled Agile, Python, Grafana, Linux
Same Application SRE role, responsibilities, and requirements as the full listing above; only the location differs.
Posted 3 weeks ago
6.0 years
0 Lacs
Vijayawada, Andhra Pradesh, India
Remote
Experience : 6.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' clients - Netskope)
Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python
Same Data Engineering (AI/ML) role, responsibilities, and requirements as the full listing above; only the location differs.
Posted 3 weeks ago
6.0 years
0 Lacs
Dehradun, Uttarakhand, India
Remote
Experience : 6.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' clients - Netskope)
Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python
Same Data Engineering (AI/ML) role, responsibilities, and requirements as the full listing above; only the location differs.
Posted 3 weeks ago
15.0 years
0 Lacs
Patna, Bihar, India
Remote
Experience : 15.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Netskope) What do you need for this opportunity? Must have skills required: CI/CD, RCA, Performance Engineering, AWS, Scaled Agile, Python, Grafana, Linux Netskope is Looking for: About The Role Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Application SRE Team supports several critical components of our foundational technologies for real-time protection, as well as our RBI and SSPM services. We are a team of software engineers focused on improving availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning of the engineering stacks. If you are passionate about solving complex problems and developing cloud services at scale, we would like to speak with you. What’s In It For You You will be part of a high caliber engineering team in the exciting space of cloud tools and infrastructure management. You will have an opportunity to work on hybrid cloud (Google Cloud, On-prem cloud) and work with cutting edge tooling like spinnaker, kubernetes, docker and more. You will solve complex, exciting challenges and improve the depth and breadth of your technical and analytical skills Your contributions to our market-leading product support will significantly impact our rapidly-growing global customer base. What You Will Be Doing Partner closely with our development teams and product managers to architect and build features that are highly available, performant and secure Develop innovative ways to smartly measure, monitor & report application and infrastructure health Gain deep knowledge of our application stack Experience improving the performance of micro-services and solve scaling/performance issues Capacity management and planning Function well in a fast-paced and rapidly-changing environment Participate with the dev teams in a 24X7 on-call rotations. Ability to debug and optimize code and automate routine tasks. Drive efficiencies in systems and processes: capacity planning, configuration management, performance tuning, monitoring and root cause analysis. Required Skills And Experience 15+ years of experience troubleshooting Unix/Linux Experience in managing a large-scale web operations role Experience in one or more of the following: C, C++, Java, Python, Go, Perl or Ruby Experience with algorithms, data structures, complexity analysis, and software design Hands-on working with private or public cloud services in a highly available and scalable production environment. Experience with continuous integration and deployment automation tools such as Jenkins, Ansible etc. Knowledge of distributed systems a big plus Previous experience working with geographically-distributed coworkers. Strong interpersonal communication skills (including listening, speaking, and writing) and ability to work well in a diverse, team-focused environment with other SREs, developers, Product Managers, etc Should have led teams, collaborating cross-functionally to deliver complex software features and solutions. Education BSCS Or Equivalent Required, MSCS Or Equivalent Strongly Preferred How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. 
Step 2: Complete the Screening Form & Upload updated Resume
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!
About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well).
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
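Purely as an illustration of the "measure, monitor and report application and infrastructure health" responsibility in this listing (not part of the official role description): a minimal sketch of exporting application-health metrics from Python so a Prometheus/Grafana stack could scrape and chart them. It assumes the prometheus_client library is installed; check_backend_health(), the metric names, the port, and the check interval are hypothetical placeholders.

    # Minimal, illustrative health-metrics exporter (sketch only).
    import random
    import time

    from prometheus_client import Gauge, start_http_server

    APP_HEALTH = Gauge("app_backend_healthy", "1 if the backend health check passes, else 0")
    CHECK_LATENCY = Gauge("app_health_check_seconds", "Duration of the last health check in seconds")

    def check_backend_health() -> bool:
        # Hypothetical stand-in for a real dependency check (DB ping, downstream API call, etc.).
        return random.random() > 0.05

    def main() -> None:
        start_http_server(8000)  # Prometheus would scrape http://localhost:8000/metrics
        while True:
            started = time.monotonic()
            healthy = check_backend_health()
            APP_HEALTH.set(1 if healthy else 0)
            CHECK_LATENCY.set(time.monotonic() - started)
            time.sleep(15)  # scrape-friendly check interval

    if __name__ == "__main__":
        main()

Grafana would then visualize these series from Prometheus; every name and number above is an illustrative choice, not a requirement of the role.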
Posted 3 weeks ago
6.0 years
0 Lacs
Thiruvananthapuram, Kerala, India
Remote
Experience : 6.00+ years
Salary : Confidential (based on experience)
Shift : (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type : Remote
Placement Type : Full time Permanent Position
(*Note: This is a requirement for one of Uplers' client - Netskope)
What do you need for this opportunity?
Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python
Netskope is Looking for:
About The Role
Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience.
The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products.
We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems; our customers depend on us to provide accurate, real-time and fault-tolerant solutions to their ever-growing data needs. This is a hands-on, impactful role that will help lead the development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments.
What's In It For You
You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics.
Your contributions will have a major impact on our global customer base and across the industry through our market-leading products.
You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills.
What You Will Be Doing
Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security.
Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments (an illustrative pipeline sketch follows this listing).
Apply MLOps best practices to deploy and monitor machine learning models in production.
Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP.
Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications (a retrieval sketch also follows this listing).
Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats.
Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards.
Drive innovation by integrating the latest AI/ML techniques into security products and services.
Mentor junior engineers and provide technical leadership across projects.
Required Skills And Experience
AI/ML Expertise
Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection.
Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn.
Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow).
Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases.
Data Engineering
Expertise in designing and optimizing ETL/ELT pipelines for large-scale data processing.
Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink).
Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery.
Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems.
Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake.
Cloud and Security Knowledge
Strong understanding of cloud platforms (AWS, Azure, GCP) and their services.
Experience with network security concepts, extended detection and response, and threat modeling.
Software Engineering
Proficiency in Python, Java, or Scala for data and ML solution development.
Expertise in scalable system design and performance optimization for high-throughput applications.
Leadership and Collaboration
Proven ability to lead cross-functional teams and mentor engineers.
Strong communication skills to present complex technical concepts to stakeholders.
Education
BSCS Or Equivalent Required, MSCS Or Equivalent Strongly Preferred
How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload updated Resume
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!
About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well).
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
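To illustrate the "scalable data pipelines" responsibility in this listing, here is a minimal sketch of an hourly extract-transform-load DAG, assuming Apache Airflow 2.x (Airflow is a listed must-have skill); the DAG id, task names, and the three placeholder callables are hypothetical and would be replaced by real ingestion logic.

    # Minimal, illustrative Airflow DAG (sketch only).
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Hypothetical placeholders for real pipeline steps (e.g., pulling logs from
    # object storage, normalizing records, loading into a warehouse).
    def extract_logs(**context):
        pass

    def transform_logs(**context):
        pass

    def load_to_warehouse(**context):
        pass

    with DAG(
        dag_id="log_pipeline_sketch",      # illustrative name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@hourly",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract", python_callable=extract_logs)
        transform = PythonOperator(task_id="transform", python_callable=transform_logs)
        load = PythonOperator(task_id="load", python_callable=load_to_warehouse)

        extract >> transform >> load       # simple linear dependency chain

In a real deployment the placeholders would call Spark jobs, Kafka consumers, or warehouse loaders, and scheduling, retries, and SLAs would be tuned to the data volume.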
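Likewise, for the Retrieval-Augmented Generation items above, a minimal Python sketch of the retrieve-then-generate flow; embed(), vector_search(), and generate() are hypothetical stand-ins for a real embedding model, vector database (such as Pinecone or PGVector), and LLM, none of which are prescribed by the listing.

    # Illustrative RAG flow (sketch only); the three stubs below are hypothetical.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Document:
        text: str
        score: float

    def embed(text: str) -> List[float]:
        """Hypothetical embedding call; in practice an embedding model or API."""
        raise NotImplementedError

    def vector_search(query_vector: List[float], top_k: int = 4) -> List[Document]:
        """Hypothetical vector-database query returning the top_k nearest documents."""
        raise NotImplementedError

    def generate(prompt: str) -> str:
        """Hypothetical LLM completion call."""
        raise NotImplementedError

    def answer(question: str) -> str:
        # 1. Embed the question, 2. retrieve nearest documents, 3. ground the LLM prompt in them.
        hits = vector_search(embed(question))
        context = "\n\n".join(doc.text for doc in hits)
        prompt = (
            "Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}"
        )
        return generate(prompt)

The essential design point is that the LLM prompt is grounded in the top-k retrieved passages rather than relying on the model's parametric memory alone.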
Posted 3 weeks ago