Jobs
Interviews

7926 TCP Jobs - Page 24

Set up a Job Alert
JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Are you ready to experiment, learn, and implement in the field of ML, Python, computer vision, hardware platforms like Jetson Nano and Raspberry Pi, and cloud services? Join us on a new adventure where your expertise can revolutionize the dynamics of our organization. We believe in selection, not rejection, and we are excited to welcome you to our team. OptiSol is your destination for a stress-free and balanced lifestyle. We provide a nurturing environment where your career can flourish. As a certified GREAT PLACE TO WORK for 4 consecutive years, we value open communication, accessible leadership, diversity, and work-life balance with flexible policies. At OptiSol, you can thrive both personally and professionally. We are at the forefront of AI and innovation, shaping the future together. Join us on this journey of learning and growth.

What we like to see in you:
- Core Competencies: Programming, AI Expertise, IoT Hardware, Protocols (RTSP, TCP, MQTT, Modbus, UART), Leadership
- Bachelor's/Master's in Computer Science, ECE, or related fields
- Expertise in Python and relevant ML frameworks (TensorFlow, PyTorch, OpenCV)
- Strong understanding of neural networks, transfer learning, and optimization techniques
- Proficiency with Jetson Nano, Raspberry Pi, Arduino, and related platforms
- Familiarity with RTSP, TCP, MQTT, Modbus, UART
- Proven experience in leading technical teams and managing projects

What do we expect:
- Hands-on experience with model quantization, multitask learning, and zero-shot fine-tuning
- Familiarity with OCR and multimodal LLMs for different data types
- Experience in API development using FastAPI and Django, plus Android AI apps
- Knowledge of industrial cameras, motor drivers, and Ethernet switches
- Proficiency in CUDA programming and GPU optimization

What You'll Bring to the Table:
- Lead a team of AI engineers and promote a creative and collaborative environment
- Work with stakeholders to define project goals and milestones
- Deliver high-quality AI-driven solutions on time
- Tune machine learning and computer vision algorithms for tasks like object detection
- Deploy AI on edge devices like Jetson Nano or Raspberry Pi
- Integrate AI models with electronics and solve hardware-software challenges
- Manage AI services on cloud platforms (AWS, Azure, Google Cloud)
- Test, validate, and document solutions following industry best practices

Core benefits you'll gain:
- Lead and inspire a team of engineers in a collaborative setting
- Directly shape project goals and ensure project success
- Gain hands-on experience in cutting-edge AI and computer vision applications
- Dive into edge devices like Jetson Nano and Raspberry Pi
- Manage AI services on cloud platforms securely and efficiently
- Develop a well-rounded skill set in AI and hardware

Join us as an OptiSolite and discover a fulfilling career with us. Explore life at OptiSol and learn more about our culture on our Insta Page.
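For context on the edge-to-cloud plumbing this role describes, here is a minimal, hedged sketch of publishing an inference result from a Jetson Nano or Raspberry Pi over MQTT using the paho-mqtt client. The broker address, topic name, and payload fields are illustrative assumptions, not details from the posting.

```python
# Minimal sketch: publish an edge-inference result over MQTT (assumed broker/topic).
# Requires the paho-mqtt package: pip install paho-mqtt
import json
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.com"   # assumption: replace with your MQTT broker
BROKER_PORT = 1883
TOPIC = "factory/line1/detections"   # assumption: illustrative topic name

def publish_detection(client: mqtt.Client, label: str, confidence: float) -> None:
    """Serialize one detection and publish it with QoS 1."""
    payload = json.dumps({
        "label": label,
        "confidence": round(confidence, 3),
        "ts": time.time(),
    })
    client.publish(TOPIC, payload, qos=1)

def main() -> None:
    # paho-mqtt 1.x style constructor; 2.x additionally takes mqtt.CallbackAPIVersion.VERSION2
    client = mqtt.Client()
    client.connect(BROKER_HOST, BROKER_PORT, keepalive=60)
    client.loop_start()                         # background network loop
    publish_detection(client, "person", 0.91)   # stand-in for a real model output
    time.sleep(1)                               # give the publish time to flush
    client.loop_stop()
    client.disconnect()

if __name__ == "__main__":
    main()
```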

Posted 1 week ago

Apply

5.0 years

0 Lacs

Greater Kolkata Area

On-site

Requirement: Senior SONiC Engineer (Developer)

Preferred Qualifications:
- Hands-on experience with Docker
- Knowledge of SONiC SAI for development of new features and integration
- Hands-on experience with Redis DB
- Strong knowledge of and hands-on experience with network ASIC architectures and SDKs
- Hands-on experience with open-source Layer 2 protocols (teamd, STP) and Layer 3 networking protocols (FRR: BGP, OSPF)
- Architectural knowledge of data center design (leaf/spine, Clos) and distributed systems
- Ability to take a project from scoping to delivery while meeting customer requirements
- Excellent written and verbal communication skills, with an ability to influence peers and customers

Basic Qualifications:
- 5+ years of software development, including programming experience in C/C++/Python/Go
- Experience in design, development, and testing of data center networking infrastructure and protocols
- Good understanding of data structures, algorithms, and computer science fundamentals
- Hands-on experience with Linux TCP/IP networking and Netlink

(ref:hirist.tech)
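As a rough illustration of the Redis-backed state this role works with, here is a small sketch that reads SONiC-style port entries from Redis using redis-py. The database index, key pattern, and field names follow common SONiC conventions but should be treated as assumptions and verified against the platform's own database configuration.

```python
# Sketch: inspect SONiC-style port configuration stored in Redis (requires redis-py).
# The DB index (4 for CONFIG_DB) and the "PORT|EthernetX" key layout are assumptions
# based on common SONiC conventions; verify them on your platform before relying on them.
import redis

def dump_port_config(host: str = "127.0.0.1", port: int = 6379, db: int = 4) -> None:
    r = redis.Redis(host=host, port=port, db=db, decode_responses=True)
    for key in sorted(r.keys("PORT|Ethernet*")):
        fields = r.hgetall(key)  # e.g. {"admin_status": "up", "speed": "100000", ...}
        print(key, fields.get("admin_status"), fields.get("speed"), fields.get("mtu"))

if __name__ == "__main__":
    dump_port_config()
```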

Posted 1 week ago

Apply

7.0 years

0 Lacs

Delhi, India

On-site

Job Description

Job Title: Infrastructure Trainer (Linux, Windows, Networking, Cloud, DevOps, Docker/Kubernetes, OpenShift)
Location: Delhi NCR
Employment Type: Full-time / Part-time / Contract
Experience Required: 3-7 years in IT infrastructure and training

Job Summary
We are seeking an experienced and dynamic Infrastructure Trainer to deliver training programs in IT infrastructure technologies including Linux, Windows Server, networking, cloud computing (AWS/Azure/GCP), and DevOps tools. The trainer will design and deliver hands-on sessions aimed at enhancing learners' technical knowledge and practical implementation skills aligned with industry requirements.

Key Responsibilities
Conduct instructor-led training sessions (ILT/Virtual ILT) in:
- Linux Administration (RHEL, Ubuntu)
- Windows Server Administration (AD, DNS, DHCP, Group Policies)
- Networking Basics (CCNA level: TCP/IP, VLAN, Routing, Switching)
- Cloud Platforms (AWS, Azure, GCP - basic to associate level)
- DevOps Tools (Git, Jenkins, Docker, Kubernetes, Ansible, Terraform)
Develop and update training materials, hands-on labs, and real-world simulation exercises. Customize training based on learner profiles (students, working professionals, freshers). Provide guidance on certifications (RHCSA, CCNA, AWS, Azure, etc.). Assess and track student progress and performance. Mentor students on projects and job interview preparation. Stay updated with industry trends and continuously enhance course content. A sample hands-on lab is sketched after this listing.

Preferred
- Certified
- Experience with online/offline training platforms (e.g., Zoom, Google Meet, LMS)
- Experience in preparing learners for certifications or placements
- Experience working in EdTech or academic institutions

Compensation: 30-50K per month
Reporting To: CTO, Learning Director

Key Details
Job Function: IT Software: Software Products & Services
Industry: IT-Hardware/Networking, IT-Software
Specialization: Application Programming
Qualification: Any Graduate
Employment Type: Full Time
Key Skills: Infrastructure Trainer, Linux, Windows, Networking, Cloud, DevOps, Docker/Kubernetes, OpenShift

Job Posted by Company: Innoworq Infotech Pvt Ltd
INNOWORQ is an IT infrastructure and solutions company that enables enterprises across industries, with a pan-India presence and services in many countries outside India. As an IT services and transformation partner, INNOWORQ brings extensive domain and technology expertise to drive competitive differentiation with measurable business outcomes. Since its inception in 2019, INNOWORQ has led with a customer-centric approach and utilizes all delivery models, be it supporting the infrastructure or its integration or deployment.

Job Id: 71624943
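Since the syllabus above covers TCP/IP basics alongside scripting, here is the kind of minimal hands-on lab a trainer might hand out: a standard-library-only TCP echo server and client. It is a generic teaching sketch, not material from the posting; the host, port, and message are placeholders.

```python
# Teaching sketch: a tiny TCP echo server and client using only the standard library.
# Run "python lab.py server" in one terminal and "python lab.py client" in another.
import socket
import sys

HOST, PORT = "127.0.0.1", 9000  # placeholders for a classroom lab

def server() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        print(f"echo server listening on {HOST}:{PORT}")
        conn, addr = srv.accept()
        with conn:
            print("connection from", addr)
            while data := conn.recv(1024):
                conn.sendall(data)  # echo the bytes straight back

def client() -> None:
    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(b"hello over TCP\n")
        print("echoed back:", sock.recv(1024).decode())

if __name__ == "__main__":
    server() if len(sys.argv) > 1 and sys.argv[1] == "server" else client()
```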

Posted 1 week ago

Apply

0.0 - 31.0 years

1 - 2 Lacs

Delhi Cantonment, New Delhi

On-site

Broadband Internet Technician - Fiber Optic (Delhi Cantonment) Company: Fiberblaze Private Limited Location: Delhi Cantonment, Delhi, India Job Type: Full-time Job Summary: We are seeking a skilled and dedicated Broadband Internet Technician to join our team in Delhi Cantonment. The ideal candidate will be responsible for the installation, configuration, and maintenance of our fiber-based broadband connections for residential and commercial customers. This role requires a strong technical aptitude, excellent problem-solving skills, and a commitment to providing outstanding customer service. Key Responsibilities: Installation and Activation: Perform new fiber-to-the-home (FTTH) and fiber-to-the-business (FTTB) installations. Run fiber optic cables from the distribution point to the customer's premises. Install and configure customer premises equipment (CPE), including Optical Network Terminals (ONTs), routers, and Wi-Fi access points. Ensure proper signal strength and network connectivity. Educate customers on how to use their new services and equipment. Maintenance and Troubleshooting: Diagnose and resolve technical issues related to fiber optic connections, including signal loss, connectivity problems, and equipment failures. Perform repairs, replacements, and upgrades of network components and customer equipment. Conduct routine maintenance checks to ensure network stability and performance. Respond to service calls and tickets in a timely and professional manner. Network and Infrastructure Management: Splice and terminate fiber optic cables using fusion splicers and other specialized tools. Document all work performed, including installations, repairs, and network configurations. Adhere to all safety protocols and company standards for network installation and maintenance. Collaborate with the network operations center (NOC) to address wider network issues. Customer Service: Interact with customers in a courteous and professional manner, ensuring a positive experience. Provide clear and concise explanations of technical issues and solutions. Maintain a clean and organized work environment, both at the customer's location and in the company vehicle. Specific Requirements: Technical Skills: Proven experience with fiber optic cable installation, splicing, and termination. Hands-on experience with fusion splicers, OTDRs (Optical Time-Domain Reflectometers), and power meters. Knowledge of IP networking, including TCP/IP, DNS, and DHCP. Familiarity with Wi-Fi technologies (2.4GHz, 5GHz, Mesh) and router configuration. Ability to read and interpret network diagrams and schematics. Qualifications and Experience: Diploma or ITI certification in Electronics, Telecommunications, or a related field is preferred. Minimum of 1-3 years of experience as a broadband or telecom technician, with a focus on fiber optic technology. Prior experience in a customer-facing technical role is a plus. Other Requirements: Valid driver's license and a clean driving record. Willingness to work flexible hours, including weekends and on-call shifts, as required. Excellent communication skills in Hindi and English. Ability to work independently and as part of a team. Physical ability to climb ladders, work in confined spaces, and lift heavy equipment. To Apply: Please send your resume and a brief cover letter outlining your relevant experience to [jainiti.prasad@gmail.com] or apply through our WhatsApp [9555333948].
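To make the IP-networking requirement above concrete, here is a hedged sketch of the sort of quick check a technician could run after an installation: resolve a hostname and confirm a TCP port answers. It uses only the Python standard library; the hostnames and ports are illustrative placeholders, not company tooling.

```python
# Sketch: post-installation connectivity check (DNS resolution + TCP reachability).
# Hostnames and ports below are placeholders, not actual ISP infrastructure.
import socket

CHECKS = [
    ("dns.google", 53),      # can we reach a public DNS server over TCP?
    ("example.com", 443),    # does HTTPS traffic get out?
]

def check(host: str, port: int, timeout: float = 3.0) -> str:
    try:
        addr = socket.getaddrinfo(host, port, family=socket.AF_INET,
                                  proto=socket.IPPROTO_TCP)[0][4][0]
    except socket.gaierror as exc:
        return f"{host}: DNS lookup failed ({exc})"
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        result = sock.connect_ex((addr, port))  # 0 means the TCP handshake succeeded
    status = "reachable" if result == 0 else f"unreachable (errno {result})"
    return f"{host} -> {addr}:{port} {status}"

if __name__ == "__main__":
    for host, port in CHECKS:
        print(check(host, port))
```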

Posted 1 week ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Minimum qualifications: Bachelor’s degree in Computer Science, a related field, or equivalent practical experience. 2 years of experience with programming in one or more programming languages. 2 years of experience working with administration (e.g. filesystems, inodes, system calls) or networking (e.g. TCP/IP, routing, network topologies and hardware, SDN). Experience in navigating enterprise software, deployment and management of workloads on Cloud. Experience in Unix/Linux systems, IP networking, performance and application issues. Preferred qualifications: Master's degree in Computer Science or Engineering. Experience in an engineering or operations role in Enterprise Applications or other large-scale enterprise space. Experience in problem-solving and analyzing complex enterprise systems. Ability to work with multiple global stakeholders. About The Job Site Reliability Engineering (SRE) combines software and systems engineering to build and run large-scale, massively distributed, fault-tolerant systems. SRE ensures that Google's services—both our internally critical and our externally-visible systems—have reliability, uptime appropriate to users' needs and a fast rate of improvement. Additionally SRE’s will keep an ever-watchful eye on our systems capacity and performance. Much of our software development focuses on optimizing existing systems, building infrastructure and eliminating work through automation. On the SRE team, you’ll have the opportunity to manage the complex challenges of scale which are unique to Google, while using your expertise in coding, algorithms, complexity analysis and large-scale system design. SRE's culture of intellectual curiosity, problem solving and openness is key to its success. Our organization brings together people with a wide variety of backgrounds, experiences and perspectives. We encourage them to collaborate, think big and take risks in a blame-free environment. We promote self-direction to work on meaningful projects, while we also strive to create an environment that provides the support and mentorship needed to learn and grow. To learn more: check out our books on Site Reliability Engineering or read a career profile about why a Software Engineer chose to join SRE. Behind everything our users see online is the architecture built by the Technical Infrastructure team to keep it running. From developing and maintaining our data centers to building the next generation of Google platforms, we make Google's product portfolio possible. We're proud to be our engineers' engineers and love voiding warranties by taking things apart so we can rebuild them. We keep our networks up and running, ensuring our users have the best and fastest experience possible. Responsibilities Design, code and execute on projects to improve the reliability posture of critical enterprise applications. Participate in the team's on-call rotation. Drive technical interactions with business partners to come up with innovative ideas in terms of improving reliability for enterprise applications. Reduce the operational work significantly for our footprint on Google Cloud Platform and Google stack. Deliver Impact by helping the team focus and choose impactful projects, and deliver to completion. Collaborate with the Sibling SRE team to support Corporate Engineering services. Build relationships with our business partners. Work with other engineering teams to ensure that our infrastructure is reliable, scalable, and secure. 
Scale systems sustainably through mechanisms like automation and evolve systems by driving changes that improve reliability and velocity. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form .
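The qualifications above mention Unix/Linux administration and TCP/IP networking; as a rough, Linux-only illustration of the kind of lightweight health probe an SRE might script, here is a sketch that reads load, established TCP connections, and open file handles from /proc. It is a generic illustration, not Google tooling.

```python
# Linux-only sketch: a lightweight host health snapshot read from /proc.
# Illustrative only; not any particular team's tooling.
from pathlib import Path

def load_average() -> tuple[float, float, float]:
    one, five, fifteen = Path("/proc/loadavg").read_text().split()[:3]
    return float(one), float(five), float(fifteen)

def established_tcp_connections() -> int:
    # /proc/net/tcp lists one socket per line after a header; state "01" == ESTABLISHED.
    lines = Path("/proc/net/tcp").read_text().splitlines()[1:]
    return sum(1 for line in lines if line.split()[3] == "01")

def open_file_handles() -> int:
    allocated, _, _maximum = Path("/proc/sys/fs/file-nr").read_text().split()
    return int(allocated)

if __name__ == "__main__":
    print("loadavg:", load_average())
    print("established TCP conns:", established_tcp_connections())
    print("open file handles:", open_file_handles())
```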

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

JR0126096 Associate, Technology Operations – Pune, India

Want to work on global strategic initiatives with a FinTech company that is poised to revolutionize the industry? Join the team and help shape our company’s digital capabilities and revolutionize an industry! Join Western Union as an Associate, Technology Operations. Western Union powers your pursuit.

The Associate, Technology Operations will own end-to-end governance for solution and services delivery of Middleware Operations and Engineering across the systems and applications portfolio, on-premises and on the cloud, using insight from customers and colleagues worldwide to improve financial services for families, small businesses, multinational corporations, and non-profit organizations.

Role Responsibilities
- Maintain and manage different middleware technologies such as Tibco BW/BE/EMS/AS, Jetty, JBoss, Tomcat, Apache, IIS, WebSphere, IHS, WebLogic, IBM ACE, MQ, and DP on on-prem or AWS cloud infrastructure.
- Design and implement highly available DR solutions.
- Collaborate with architecture, engineering, and support teams in designing and deploying various application solutions.
- Handle multiple projects and deadlines independently in a fast-paced environment.
- Advanced troubleshooting skills: application performance tuning and issue resolution.
- Good communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams.
- Good delivery exposure from configuration through development and deployment; experience with Agile/Scrum methodologies.
- Design and execute upgrades and migrations, both on-prem and in the cloud.
- Define and manage best practices around application security, and help ensure security and compliance across all application systems.
- Provide on-call support for production systems.
- Drive continuous improvement and automate as much as possible.
- Communicate clearly and regularly with project teams and management.
- Mentor the team, build cloud knowledge within the team, and drive team success.

Role Requirements
- 3+ years of experience working on different middleware technologies (Tibco BW/BE/EMS/AS, Jetty, JBoss, Tomcat, Apache, IIS, WebSphere, IHS, WebLogic, IBM ACE, MQ, DP).
- Knowledge of Hawk rules, Grafana, and Prometheus.
- Familiarity with IT Service Management tools like ServiceNow.
- Experience with Splunk, AppD, Zenoss, AWS CloudWatch, and CI/CD tools.
- Strong Windows and AIX/Unix/Linux administration skills.
- Expert-level JVM dump reading and end-to-end troubleshooting skills.
- Experience using cloud-native technologies to build applications; strong understanding of serverless computing.
- DevOps exposure and knowledge of one or more tools such as Chef, Puppet, Jenkins, Ansible, Python.
- Working experience on different flavors of OS (Unix/Linux/Solaris/Windows).
- Must have: L2 MW – Tomcat/WebSphere/JBoss/MS IIS. Experience administering Tomcat, WebSphere, and JBoss on Linux and MS IIS on Windows.
- Good understanding of networking concepts and protocols (TCP/IP, DNS, HTTP/HTTPS) to configure network settings, troubleshoot connectivity issues, and optimize middleware communication.
- Experience with DevOps tools like Spinnaker, Ansible, and CloudBees.

We make financial services accessible to humans everywhere. Join us for what’s next. Western Union is positioned to become the world’s most accessible financial services company, transforming lives and communities.
To support this, we have launched a Digital Banking Service and Wallet across several European markets to enhance our customers’ experiences by offering a state-of-the-art digital Ecosystem. More than moving money, we design easy-to-use products and services for our digital and physical financial ecosystem that help our customers move forward. Just as we help our global customers prosper, we support our employees in achieving their professional aspirations. You’ll have plenty of opportunities to learn new skills and build a career, as well as receive a great compensation package. If you’re ready to help drive the future of financial services, it’s time for the Western Union. Learn more about our purpose and people at https://careers.westernunion.com. Benefits You will also have access to short-term incentives, multiple health insurance options, accident and life insurance, and access to best-in-class development platforms, to name a few (https://careers.westernunion.com/global-benefits/). Please see the location-specific benefits below and note that your Recruiter may share additional role-specific benefits during your interview process or in an offer of employment. Your India Specific Benefits Include Employees Provident Fund [EPF] Gratuity Payment Public holidays Annual Leave, Sick leave, Compensatory leave, and Maternity / Paternity leave Annual Health Check up Hospitalization Insurance Coverage (Mediclaim) Group Life Insurance, Group Personal Accident Insurance Coverage, Business Travel Insurance Relocation Benefit Western Union values in-person collaboration, learning, and ideation whenever possible. We believe this creates value through common ways of working and supports the execution of enterprise objectives which will ultimately help us achieve our strategic goals. By connecting face-to-face, we are better able to learn from our peers, problem-solve together, and innovate. Our Hybrid Work Model categorizes each role into one of three categories. Western Union has determined the category of this role to be Hybrid. This is defined as a flexible working arrangement that enables employees to divide their time between working from home and working from an office location. The expectation is to work from the office a minimum of three days a week. We are passionate about diversity. Our commitment is to provide an inclusive culture that celebrates the unique backgrounds and perspectives of our global teams while reflecting the communities we serve. We do not discriminate based on race, color, national origin, religion, political affiliation, sex (including pregnancy), sexual orientation, gender identity, age, disability, marital status, or veteran status. The company will provide accommodation to applicants, including those with disabilities, during the recruitment process, following applicable laws. Estimated Job Posting End Date 08-05-2025 This application window is a good-faith estimate of the time that this posting will remain open. This posting will be promptly updated if the deadline is extended or the role is filled.
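As a small, hedged illustration of the HTTP/HTTPS connectivity troubleshooting this role calls for, here is a standard-library sketch that probes an application endpoint behind a middleware tier and reports status and latency. The host and path are placeholders; this is generic diagnostic code, not Western Union tooling.

```python
# Sketch: check an HTTPS endpoint behind a middleware tier and report status + latency.
# The host and path are placeholders; this is generic troubleshooting code.
import http.client
import time

def probe(host: str, path: str = "/", timeout: float = 5.0) -> None:
    conn = http.client.HTTPSConnection(host, timeout=timeout)
    start = time.perf_counter()
    try:
        conn.request("GET", path, headers={"Host": host, "User-Agent": "mw-probe/0.1"})
        resp = conn.getresponse()
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{host}{path}: HTTP {resp.status} {resp.reason} in {elapsed_ms:.1f} ms")
        print("  server:", resp.getheader("Server"),
              "| content-type:", resp.getheader("Content-Type"))
        resp.read()  # drain the body so the connection closes cleanly
    finally:
        conn.close()

if __name__ == "__main__":
    probe("example.com", "/")
```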

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

JR0126094 Associate, Technology Operations – Pune, India

Want to work on global strategic initiatives with a FinTech company that is poised to revolutionize the industry? Join the team and help shape our company’s digital capabilities and revolutionize an industry! Join Western Union as an Associate, Technology Operations. Western Union powers your pursuit.

The Associate, Technology Operations will own end-to-end governance for solution and services delivery of Middleware Operations and Engineering across the systems and applications portfolio, on-premises and on the cloud, using insight from customers and colleagues worldwide to improve financial services for families, small businesses, multinational corporations, and non-profit organizations.

Role Responsibilities
- Maintain and manage different middleware technologies such as Tibco BW/BE/EMS/AS, Jetty, JBoss, Tomcat, Apache, IIS, WebSphere, IHS, WebLogic, IBM ACE, MQ, and DP on on-prem or AWS cloud infrastructure.
- Design and implement highly available DR solutions.
- Collaborate with architecture, engineering, and support teams in designing and deploying various application solutions.
- Handle multiple projects and deadlines independently in a fast-paced environment.
- Advanced troubleshooting skills: application performance tuning and issue resolution.
- Good communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams.
- Good delivery exposure from configuration through development and deployment; experience with Agile/Scrum methodologies.
- Design and execute upgrades and migrations, both on-prem and in the cloud.
- Define and manage best practices around application security, and help ensure security and compliance across all application systems.
- Provide on-call support for production systems.
- Drive continuous improvement and automate as much as possible.
- Communicate clearly and regularly with project teams and management.
- Mentor the team, build cloud knowledge within the team, and drive team success.

Role Requirements
- 3+ years of experience working on different middleware technologies (Tibco BW/BE/EMS/AS, Jetty, JBoss, Tomcat, Apache, IIS, WebSphere, IHS, WebLogic, IBM ACE, MQ, DP).
- Knowledge of Hawk rules, Grafana, and Prometheus.
- Familiarity with IT Service Management tools like ServiceNow.
- Experience with Splunk, AppD, Zenoss, AWS CloudWatch, and CI/CD tools.
- Strong Windows and AIX/Unix/Linux administration skills.
- Expert-level JVM dump reading and end-to-end troubleshooting skills.
- Experience using cloud-native technologies to build applications; strong understanding of serverless computing.
- DevOps exposure and knowledge of one or more tools such as Chef, Puppet, Jenkins, Ansible, Python.
- Working experience on different flavors of OS (Unix/Linux/Solaris/Windows).
- Must have: L2 MW – Tomcat/WebSphere/JBoss/MS IIS. Experience administering Tomcat, WebSphere, and JBoss on Linux and MS IIS on Windows.
- Good understanding of networking concepts and protocols (TCP/IP, DNS, HTTP/HTTPS) to configure network settings, troubleshoot connectivity issues, and optimize middleware communication.
- Experience with DevOps tools like Spinnaker, Ansible, and CloudBees.

We make financial services accessible to humans everywhere. Join us for what’s next. Western Union is positioned to become the world’s most accessible financial services company, transforming lives and communities.
To support this, we have launched a Digital Banking Service and Wallet across several European markets to enhance our customers’ experiences by offering a state-of-the-art digital Ecosystem. More than moving money, we design easy-to-use products and services for our digital and physical financial ecosystem that help our customers move forward. Just as we help our global customers prosper, we support our employees in achieving their professional aspirations. You’ll have plenty of opportunities to learn new skills and build a career, as well as receive a great compensation package. If you’re ready to help drive the future of financial services, it’s time for the Western Union. Learn more about our purpose and people at https://careers.westernunion.com. Benefits You will also have access to short-term incentives, multiple health insurance options, accident and life insurance, and access to best-in-class development platforms, to name a few (https://careers.westernunion.com/global-benefits/). Please see the location-specific benefits below and note that your Recruiter may share additional role-specific benefits during your interview process or in an offer of employment. Your India Specific Benefits Include Employees Provident Fund [EPF] Gratuity Payment Public holidays Annual Leave, Sick leave, Compensatory leave, and Maternity / Paternity leave Annual Health Check up Hospitalization Insurance Coverage (Mediclaim) Group Life Insurance, Group Personal Accident Insurance Coverage, Business Travel Insurance Relocation Benefit Western Union values in-person collaboration, learning, and ideation whenever possible. We believe this creates value through common ways of working and supports the execution of enterprise objectives which will ultimately help us achieve our strategic goals. By connecting face-to-face, we are better able to learn from our peers, problem-solve together, and innovate. Our Hybrid Work Model categorizes each role into one of three categories. Western Union has determined the category of this role to be Hybrid. This is defined as a flexible working arrangement that enables employees to divide their time between working from home and working from an office location. The expectation is to work from the office a minimum of three days a week. We are passionate about diversity. Our commitment is to provide an inclusive culture that celebrates the unique backgrounds and perspectives of our global teams while reflecting the communities we serve. We do not discriminate based on race, color, national origin, religion, political affiliation, sex (including pregnancy), sexual orientation, gender identity, age, disability, marital status, or veteran status. The company will provide accommodation to applicants, including those with disabilities, during the recruitment process, following applicable laws. Estimated Job Posting End Date 08-05-2025 This application window is a good-faith estimate of the time that this posting will remain open. This posting will be promptly updated if the deadline is extended or the role is filled.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Where Data Does More. Join the Snowflake team. Snowflake’s Support team is expanding! We are looking for a Senior Cloud Support Engineer who likes working with data and solving a wide variety of issues utilizing their technical experience having worked on a variety of operating systems, database technologies, big data, data integration, connectors, and networking. Snowflake Support is committed to providing high-quality resolutions to help deliver data-driven business insights and results. We are a team of subject matter experts collectively working toward our customers’ success. We form partnerships with customers by listening, learning, and building connections. Snowflake’s values are key to our approach and success in delivering world-class Support. Putting customers first, acting with integrity, owning initiative and accountability, and getting it done are Snowflake's core values, which are reflected in everything we do. As a Senior Cloud Support Engineer , your role is to delight our customers with your passion and knowledge of Snowflake Data Warehouse. Customers will look to you for technical guidance and expert advice with regard to their effective and optimal use of Snowflake. You will be the voice of the customer regarding product feedback and improvements for Snowflake’s product and engineering teams. You will play an integral role in building knowledge within the team and be part of strategic initiatives for organizational and process improvements. Based on business needs, you may be assigned to work with one or more Snowflake Priority Support customers . You will develop a strong understanding of the customer’s use case and how they leverage the Snowflake platform. You will deliver exceptional service, enabling them to achieve the highest levels of continuity and performance from their Snowflake implementation. Ideally, you have worked in a 24x7 environment, handled technical case escalations and incident management, worked in technical support for an RDBMS, been on-call during weekends, and are familiar with database release management. AS A SENIOR CLOUD SUPPORT ENGINEER AT SNOWFLAKE, YOU WILL: Drive technical solutions to complex problems providing in-depth analysis and guidance to Snowflake customers and partners using the following methods of communication: email, web, and phone Adhere to response and resolution SLAs and escalation processes to ensure fast resolution of customer issues that exceed expectations Demonstrate good problem-solving skills and be process-oriented Utilize the Snowflake environment, connectors, 3rd party partner software, and tools to investigate issues Document known solutions to the internal and external knowledge base Report well-documented bugs and feature requests arising from customer-submitted requests Partner with engineering teams in prioritizing and resolving customer requests Participate in a variety of Support initiatives Provide support coverage during holidays and weekends based on business needs OUR IDEAL SENIOR CLOUD SUPPORT ENGINEER WILL HAVE THE FOLLOWING: Bachelor’s or Master’s degree in Computer Science or equivalent discipline. 5+ years experience in a Technical Support environment or a similar technical function in a customer-facing role. Excellent written and communication skills in English with attention to detail. Ability to reproduce and troubleshoot complex technical issues. In-depth knowledge of one of the major cloud service providers' ecosystems. 
ETL/ELT tools knowledge such as AWS Glue, EMR, Azure Data Factory, and Informatica. Expert working knowledge of internet protocols such as TCP/IP, HTTP/S, SFTP, and DNS as well as the ability to use diagnostic tools to troubleshoot connectivity issues. In-depth understanding of SSL/TLS handshake and troubleshooting SSL negotiation Advanced knowledge in driver configuration and troubleshooting for ODBC, JDBC, GO, and .NET. High level of proficiency with system troubleshooting on a variety of operating systems (Windows, Mac, *Nix), including many of the following tools: tcpdump, lsof, Wireshark, netstat, sar, perfmon, and process explorer. Debugging experience in Python, Java, or Scala. Experienced with software development principles, including object-oriented programming and version control systems (e.g., Git, GitHub, GitLab) Familiarity with Kafka and Spark technologies. NICE TO HAVE: Understanding of data loading/unloading process in Snowflake. Understanding Snowflake streams and tasks. Expertise in database migration processes. SQL skills, including JOINS, Common Table Expressions (CTEs), and Window Functions. Experience in supporting applications hosted on Amazon AWS or Microsoft Azure. Familiarity with containerization technologies like Docker and Kubernetes. Working experience in Data Visualization tools such as Tableau, Power BI, matplotlib, seaborn, and Plotly. Experience developing CI/CD components for production-ready data pipelines. Experience working with big data and/or MPP (massively parallel processing) databases Experienced with data warehousing fundamentals and concepts Database migration and ETL experience Familiarity with Data Manipulation and Analysis such as pandas, NumPy, scipy. Knowledge of authentication and authorization protocols (OAuth, JWT, etc.). SPECIAL REQUIREMENTS: Participate in pager duty rotations during nights, weekends, and holidays. Ability to work the 4th/night shift, which typically starts at 10 pm IST. Applicants should be flexible with schedule changes to meet business needs. Snowflake is growing fast, and we’re scaling our team to help enable and accelerate our growth. We are looking for people who share our values, challenge ordinary thinking, and push the pace of innovation while building a future for themselves and Snowflake. How do you want to make your impact? For jobs located in the United States, please visit the job posting on the Snowflake Careers Site for salary and benefits information: careers.snowflake.com
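Given the emphasis above on SSL/TLS handshake troubleshooting, here is a minimal standard-library sketch that completes a TLS handshake against a host and prints the negotiated protocol, cipher, and certificate details. The hostname is a placeholder and the snippet is illustrative rather than Snowflake-specific.

```python
# Sketch: complete a TLS handshake and report what was negotiated (stdlib only).
# The target host is a placeholder; point it at the endpoint you are troubleshooting.
import socket
import ssl

def inspect_tls(host: str, port: int = 443, timeout: float = 5.0) -> None:
    context = ssl.create_default_context()  # system CA store, hostname verification on
    with socket.create_connection((host, port), timeout=timeout) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            print("protocol :", tls.version())            # e.g. TLSv1.3
            print("cipher   :", tls.cipher()[0])          # negotiated cipher suite
            print("subject  :", dict(x[0] for x in cert["subject"]).get("commonName"))
            print("notAfter :", cert["notAfter"])         # certificate expiry

if __name__ == "__main__":
    inspect_tls("example.com")
```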

Posted 1 week ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

What You’ll Do

Eaton Corporation’s Device Integration team has an opening for a Software Engineer who is passionate about his or her craft. He/she will be directly involved in the architecture, design, and development of an Internet of Things application. Although Eaton is an established company with a diverse product portfolio, this product exhibits many characteristics of a start-up initiative. This product, being developed from the ground up, will be used by Eaton product teams as a framework on which to build and extend customer-facing applications and solutions. So, if you’re an experienced software professional yearning to work on a project leveraging the latest software technologies and trends such as IoT, NoSQL, big data, open source, DevOps, mobile, and cybersecurity, this is the position for you! Not only will you be working with some amazing technology, you’ll also be part of an enthusiastic team of software professionals working to make an immediate organizational impact and having lots of fun along the way!

- Work with your team and others, contributing to the architecture, design, and implementation of an Internet of Things application. Development will be primarily in C# and .NET.
- Author high-quality, unit-tested code.
- Demonstrate and document solutions using flowcharts, diagrams, code comments, code snippets, and performance instruments.
- Provide work estimates and participate in design, implementation, and code reviews.
- Execute agile work plans for iterative and incremental project delivery.
- Expand job knowledge by studying software development techniques and programming languages; participate in educational opportunities and read professional publications.
- Work with test teams to ensure adequate and appropriate test case coverage; investigate and fix bugs; create automated test scripts.

Qualifications
- BE/B.Tech/M.Tech/MCA
- 6-9 years of progressive experience in the software industry developing, designing, and deploying technology solutions and shipping high-quality products
- 5-7 years of experience with C# and .NET
- 5+ years of experience working with Azure

Skills
- Proficient with C# and .NET technologies and associated IDEs (Visual Studio, Eclipse, IntelliJ, etc.)
- Understanding of databases and related concepts, relational and non-relational (SQL Server, Cosmos DB, MongoDB, etc.)
- Understanding of software design principles, algorithms, data structures, and multithreading concepts
- Understanding of object-oriented design and programming, including the use of design patterns
- Working knowledge of cloud development platforms such as Azure or AWS
- Working knowledge of security concepts such as encryption, certificates, and key management
- Working knowledge of networking protocols and concepts (HTTP, TCP, WebSocket)
- Working knowledge of network and distributed computing concepts
- Experience applying best practices in software engineering
- Experience with Agile development methodologies and concepts
- Strong problem-solving and software debugging skills
- Knowledge of CI/CD concepts, tools, and technologies
- Knowledge of fieldbus protocols like Modbus TCP/RTU is an added advantage
- Excellent verbal and written communication skills, including the ability to effectively explain technical concepts
- Very good at understanding and prioritizing tasks, issues, and work

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

What You’ll Do

Eaton’s Life Safety Division is looking for talent who can develop next-generation technology solutions that change how users connect, explore, and interact with our devices. As a member of a creative, motivated, and talented team, we need versatile engineers who are passionate about tackling new problems as we continue to push technology forward. If you get excited about building new things and modernizing existing things, then our team is your next career step. The position’s main focus will be software development for Life Safety products, enriching and fostering a climate of innovation to drive growth and accelerate capability development.

As a Senior Firmware Engineer, you will work in the domain of embedded firmware development for edge devices in life safety systems and will be part of a diverse, talented, cross-functional team that focuses on complete product development. You will work with the Product Owner to set your priorities, ensuring the achievement of deliverables and other milestones, collaborate with other engineering and product teams in the company, establish a technical growth path, and improve the way we deliver value to our customers. Your contribution will be part of a global organization focused on providing state-of-the-art Life Safety products in emergency lighting.

You have strong written and verbal communication skills and the ability to handle multiple concurrent projects and tasks while adapting quickly to changing priorities. You thrive in a growth-oriented organization. You are passionate and empathetic toward customer needs, growing people, and the organization. Setting your team and peers up for success through collaboration and feedback is paramount. We are looking for someone who is passionate, who understands and inculcates ethical values with transparency, is a quick learner of upcoming technical domains, believes in leading the team from the front, and takes accountability for an assigned charter that fits into the overall goals of the embedded domain, which comprises embedded software, connectivity, industrial networking, and Internet of Things (IoT) technologies and solutions.

Qualifications
- Contribute to software development efforts through the design and implementation of world-class, high-performance firmware based on Linux.
- Define a structured software solution that meets the technical requirements and interfaces while optimizing performance, security, and reusability.
- Translate requested business features into technical requirements and acceptance criteria used to direct the development team and determine implementation completion.
- Work directly with stakeholders, engineering, and test to create high-quality products that solve customer problems.
- Develop and execute plans for incremental and iterative project delivery.
- Author high-quality, unit-tested code and provide software reuse opportunities.
- Work with the test team to ensure adequate and appropriate test case coverage with defined software quality metrics.

Skills
- Bachelor’s degree in Computer Science, Electrical, or Electronics Engineering from an accredited institution (required).
- 3+ years of hands-on expertise in C, C++, and Linux-based firmware applications on microcontrollers.
- Experience in development of industrial fieldbus communication protocols like EtherNet/IP and Modbus TCP, and of web servers.
- Experience with RTOS platforms, addressing cybersecurity requirements, and fixing issues reported by static code analysis tools like Coverity or Parasoft.
- Hands-on experience in developing application firmware for user interfaces and writing unit tests.
- Experience working in an Agile team and using tools like JIRA, Bitbucket, and JAMA.
- Hands-on experience developing test automation is an added advantage.
- Excellent interpersonal and communication skills, particularly written and oral communication, including the ability to explain technical concepts.
- Sound knowledge of and experience with process frameworks (e.g., CMMI), including requirements management, defect tracking, build management, change management, and configuration management tools.
- Experience collaborating with local/global suppliers for hardware/software procurement, building test lab setups, etc.
- Experience performing software FMEA.
- A high degree of aptitude and creativity is required.
- Understanding of OPC UA, MQTT, PROFINET, and EtherCAT.
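Because the skills above call out Modbus TCP specifically, here is a hedged, standard-library-only sketch of the protocol mechanics: framing a Read Holding Registers request (function code 0x03) with the 7-byte MBAP header and decoding the reply. The device address, register offsets, and unit ID are placeholder assumptions; production firmware would sit on an embedded stack in C/C++ rather than Python.

```python
# Sketch: Modbus TCP "Read Holding Registers" (function 0x03) over a raw socket.
# MBAP header: transaction id (2B), protocol id (2B, always 0), length (2B), unit id (1B).
# The target IP, unit id, and register addresses below are placeholders.
import socket
import struct

def read_holding_registers(host: str, start: int, count: int, unit: int = 1) -> list[int]:
    # Length field = bytes that follow it: unit id (1) + function (1) + addr (2) + count (2) = 6.
    request = struct.pack(">HHHBBHH", 1, 0, 6, unit, 0x03, start, count)
    with socket.create_connection((host, 502), timeout=3.0) as sock:
        sock.sendall(request)
        header = sock.recv(7)                       # MBAP header of the response
        _, _, length, _ = struct.unpack(">HHHB", header)
        # A single recv is fine for a short demo reply; real code should loop until
        # all (length - 1) PDU bytes have arrived.
        pdu = sock.recv(length - 1)                 # function code + byte count + data
        function, byte_count = pdu[0], pdu[1]
        if function & 0x80:
            raise RuntimeError(f"Modbus exception code {pdu[1]}")
        return list(struct.unpack(f">{count}H", pdu[2:2 + byte_count]))

if __name__ == "__main__":
    values = read_holding_registers("192.168.1.50", start=0, count=4)  # placeholder device
    print("registers 0-3:", values)
```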

Posted 1 week ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Colt provides network, voice and data centre services to thousands of businesses around the world, allowing them to focus on delivering their business goals instead of the underlying infrastructure. Why we need this role This role is critical to protecting both internal telecom infrastructure and customer-facing security services. It ensures the secure deployment and management of technologies across backbone, edge, and cloud environments, while supporting the delivery and integration of managed security solutions for customers. The role plays a key part in incident response, vulnerability management, and maintaining robust security standards. By collaborating across engineering, operations, and product teams, it helps embed security into every layer of the network and service lifecycle, ensuring resilience, compliance, and customer trust. What You Will Do Security Product Engineering (Customer-Facing Focus) Support deployment and integration of customer security products such as managed firewalls, SD-WAN, SASE platforms, and DDoS mitigation solutions. Perform configuration, troubleshooting, and tuning of security services in customer environments. Assist in onboarding, proof-of-concept testing, and support transitions to operations for customer security services. Work with solution architects to operationalize and maintain secure design patterns and templates. Infrastructure Security (Internal Focus) Deploy and manage security technologies across the telecom backbone, edge, and data centre infrastructure (e.g., firewalls, IDS/IPS, SIEM, PAM, NAC). Collaborate with network and systems teams to secure IP/MPLS transport, SDN platforms, automation tools, and cloud workloads. Monitor and analyse security events and alerts, responding to incidents and escalating as appropriate. Assist with vulnerability assessments, patch management validation, and configuration hardening. Document and maintain infrastructure security standards, configurations, and runbooks. Support & Collaboration Participate in security incident response, root cause analysis, and remediation efforts. Provide input on threat modelling, security testing, and design reviews for internal and external services. Stay current on security threats, tooling, and telecom-relevant vulnerabilities. Collaborate cross-functionally with engineering, operations, product, and customer support teams. What We're Looking For Must haves 3–7 years of experience in security engineering and/or network engineering Solid understanding of TCP/IP, routing, firewalls, VPN, and network segmentation principles. Hands-on experience with security tools such as firewalls (Fortinet, Palo Alto, etc.), SIEM/SOAR, IDS/IPS, EDR, or vulnerability scanners. Familiarity with Linux, scripting (Python, Bash), and infrastructure-as-code concepts. Knowledge of secure configuration standards (e.g., CIS benchmarks) and common protocols (e.g., BGP, DNS, SNMP). Might haves Experience supporting or delivering telecom or ISP infrastructure. Exposure to customer-facing security services or managed security environments. Familiarity with regulatory and industry standards (e.g., NIST, ISO 27001, UK TSA). Certifications such as Security+, GSEC, GCIA, or equivalent are a plus. 
Telecom or carrier experience is strongly preferred.

Skills
Cyber Security Architecture, IT Architecture Methodologies, Cyber Security Tools/Products, Cyber Security Planning, Security Compliance

Education
A Master's or Bachelor's degree in Computer Science, Information Security, or a related field.

What We Offer You
Looking to make a mark? At Colt, you’ll make a difference. Because around here, we empower people. We don’t tell you what to do. Instead, we employ people we trust, who come together across the globe to create intelligent solutions. Our global teams are full of ambitious, driven people, all working together towards one shared purpose: to put the power of the digital universe in the hands of our customers wherever, whenever and however they want. We give our people the opportunity to inspire and lead teams, and work on projects that connect people, cities, businesses, and ideas. We want you to help us change the world, for the better.

Diversity and inclusion
Inclusion and valuing diversity of thought and experience are at the heart of our culture here at Colt. From day one, you’ll be encouraged to be yourself because we believe that’s what helps our people to thrive. We welcome people with diverse backgrounds and experiences, regardless of their gender identity or expression, sexual orientation, race, religion, disability, neurodiversity, age, marital status, pregnancy status, or place of birth. Most recently we have signed the UN Women Empowerment Principles, which guide our Gender Action Plan, and trained 60 (and growing) Colties to be Mental Health First Aiders. Please speak with a member of our recruitment team if you require adjustments to our recruitment process to support you. For more information about our Inclusion and Diversity agenda, visit our DEI pages.

Benefits
Our benefits support you through all parts of life, for both physical and mental health: flexible working hours and the option to work from home; an extensive induction program with experienced mentors and buddies; opportunities for further development and education; a Global Family Leave Policy; an Employee Assistance Program; and internal inclusion and diversity employee networks.

A global network
When you join Colt you become part of our global network. We are proud of our colleagues and the stories and experience they bring – take a look at our ‘Our People’ site, including our Empowered Women in Tech.
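To ground the configuration-hardening work mentioned in this role, here is a hedged sketch of a tiny CIS-style check that parses sshd_config for a few hardening directives. The directive list and expected values are illustrative assumptions rather than any benchmark's authoritative content.

```python
# Sketch: minimal CIS-style audit of a few sshd hardening directives.
# The expected values below are common recommendations, not an authoritative benchmark.
from pathlib import Path

EXPECTED = {                      # directive -> expected value (assumed policy)
    "permitrootlogin": "no",
    "passwordauthentication": "no",
    "x11forwarding": "no",
    "maxauthtries": "4",
}

def audit_sshd(path: str = "/etc/ssh/sshd_config") -> None:
    found: dict[str, str] = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        parts = line.split(None, 1)
        if len(parts) == 2:
            found[parts[0].lower()] = parts[1].strip().lower()
    for directive, expected in EXPECTED.items():
        actual = found.get(directive, "<not set>")
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}  {directive}: expected {expected!r}, found {actual!r}")

if __name__ == "__main__":
    audit_sshd()
```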

Posted 1 week ago

Apply

4.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Location: Mumbai, India
Domain Focus: High-Frequency Trading (HFT), Ultra-Low Latency Systems
Experience: 4+ years in C++ development in a low-latency/HFT environment
International Talent: Yes, international applicants are encouraged to apply.

Are you a highly skilled C++ developer driven to build and optimize systems at the cutting edge of technology? This is your chance to apply your passion to create a tangible impact. We’re exploring opportunities for talented professionals to join the dynamic HFT community in Mumbai.

🔷 The Profile – Your Expertise
We’re recruiting natural problem solvers, driven by deep curiosity and a commitment to excellence. You’ll be hands-on, have a great track record, and see yourself as an architect of solutions.

Career Path:
- Consistently solved complex problems, optimized performance, and reduced latency.
- Deep expertise in building and maintaining performance-critical applications in the financial industry.

Projects And Performance:
- Demonstrable track record in developing and optimizing ultra-low latency trading systems.
- Able to quantify past achievements (e.g., improving system latency by a specific percentage).
- Implemented specific risk management techniques.

Professional Skills:
- Able to translate complex requirements into efficient solutions.
- Expertise in project management and communicating complex concepts.

Technical Prowess:
- Proficiency in network programming (TCP/IP, UDP).
- Deep knowledge of multithreading, concurrency, and synchronization primitives.
- Experience with lock-free data structures and algorithms.
- Familiarity with financial market concepts and trading protocols (e.g., FIX).

Qualifications, Licenses And Academic Achievements:
- Strong academic background in Computer Science or a related field (e.g., B.E., M.S.).
- Relevant professional qualifications or published research are a significant plus.

🔷 Who You Are And What We Need
- Thrives in multidisciplinary environments.
- Highly motivated, self-directed, and flourishes in fast-paced, results-driven settings.
- Possesses a strong work ethic and ownership mentality, and is motivated by high-stakes intellectual challenges.
- Has a sharp analytical mindset, adapts quickly to new technologies, and tackles complex problems.

If you're ready to lead with conviction and build something enduring, we want to hear from you. Apply above or connect directly: info@aaaglobal.co.uk | www.aaaglobal.co.uk. Discreet conversations are always welcome (if concerned, contact us directly).
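As a rough illustration of the network programming this profile calls for, here is a hedged sketch of joining a UDP multicast group and timestamping incoming market-data packets. The multicast group, port, and payload layout are placeholder assumptions; real feed handlers are written in C++ against exchange-specific wire specifications.

```python
# Sketch: join a UDP multicast group and timestamp incoming packets (stdlib only).
# The group, port, and "first 8 bytes are a big-endian sequence number" are placeholder
# assumptions; real market-data feeds define their own wire formats.
import socket
import struct
import time

GROUP, PORT = "239.1.1.1", 30001   # placeholder multicast feed

def listen() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Join the multicast group on all interfaces (interface address 0.0.0.0 = INADDR_ANY).
    mreq = socket.inet_aton(GROUP) + socket.inet_aton("0.0.0.0")
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, addr = sock.recvfrom(2048)
        recv_ns = time.monotonic_ns()          # receive timestamp for latency accounting
        seq = struct.unpack(">Q", data[:8])[0] if len(data) >= 8 else -1
        print(f"{recv_ns} ns  {addr[0]}  seq={seq}  len={len(data)}")

if __name__ == "__main__":
    listen()
```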

Posted 1 week ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Minimum qualifications: Bachelor's degree in Network Engineering or Telecom Engineering, a related technical field, or equivalent practical experience. 2 years of experience in telecommunications at carrier scale working with optical network infrastructure, transmission systems, layer2/3 routers, and data services. Experience working with field operation technicians, engineers, contractors, or vendors in a telecommunications environment. Experience in network operations, systematic troubleshooting in IP and Optical environment. Preferred qualifications: Experience in one or more of the following disciplines: Layer 1 optical transmissions systems, layer 3 routing TCP/IP, wireless networking, network design, or operations. Knowledge of NOC/TAC environment managing large-scale optical networks. Knowledge of TCP/IP fundamentals, Layer 2 and Layer 3 network configurations and network routing protocols (e.g., OSPF, IS-IS, BGP, MPLS). Ability to participate in shift work in a network operations center environment. Ability to employ troubleshooting methodology and use of creative problem solving under pressure. About the job As a Network Implementation Engineer, you will be the initial point of our efforts to execute deployment, maintenance, and operations of private data networks worldwide. You will work with Technical Program Managers, Network Engineers, Design and Infrastructure Engineers, Field Engineers within Google, as well as construction and telecommunications vendors and contractors, all to position your team and organization for success. You will facilitate faster, better, and more efficient, positive outcomes for the business and our customers. Your objective will be to build the world’s most reliable, cost-effective and scalable network to support all of our current and future customers and users globally. Google's network provides services to millions of Internet users around the world. Our metros are on the edge of our network where Google connects to its users. The Network Team is responsible for operating that network reliably and at scale. Our team owns the full life cycle of all space, power, and network assets in all of Google’s data centers and metro points of presence globally. From the foundation, we are involved from site acquisition to construction and are accountable for what space and power is delivered. We're involved in every facet of network delivery from architecture and design to installation, configuration, activation, and commissioning. Responsibilities Respond to network outages by providing a diagnosis of Layer 1 to Layer 3 network errors and initiate and complete network repairs. Use workflows and systems to diagnose optical Layer 1 and Layer 3 network issues to completion. Provide remote support for field operations related to network repairs. Participate in 24/7 global team shift rotations. Interface with high-level partners/providers to communicate repair status and updates as needed. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. 
If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form .

Posted 1 week ago

Apply

13.0 years

0 Lacs

Kochi, Kerala, India

Remote

Experience: 13.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients - Netskope)

Must-have skills: gRPC, Protocol Buffers, Avro, storage systems

About the role:
Please note, this team is hiring across all levels; candidates are individually assessed and leveled appropriately based on their skills and experience. The Data Platform team sits at the intersection of security, big data, and cloud computing. It is responsible for providing ultra-low-latency access to global security insights and intelligence data, enabling customers to act in near real time. Netskope is looking for a seasoned engineer to help build next-generation data pipelines that provide near real-time ingestion of security insights and intelligence data using cloud and open-source data technologies.

What's in it for you:
- Be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics
- Make a major impact on a global customer base and across the industry through market-leading products
- Solve complex, interesting challenges and improve the depth and breadth of your technical and business skills

What you will be doing:
- Building the next-generation data pipeline for near real-time ingestion of security insights and intelligence data
- Partnering with industry experts in security and big data, and with product and engineering teams, to conceptualize, design, and build innovative solutions to hard customer problems
- Evaluating open-source technologies to find the best fit for our needs - and contributing to some of them
- Helping other teams architect their systems on top of the data platform and influencing their architecture

Required skills and experience:
- Expertise in the architecture and design of highly scalable, efficient, and fault-tolerant data pipelines for near real-time and real-time processing
- Strong diagnostic and problem-solving skills, with a demonstrated ability to simplify complex problems into elegant solutions
- Extensive experience designing high-throughput, low-latency data services using gRPC streaming and performance optimizations (see the illustrative Go sketch after this listing)
- Solid grasp of serialization formats (e.g., Protocol Buffers, Avro), network protocols (e.g., TCP/IP, HTTP/2), and security considerations (e.g., TLS, authentication, authorization) in distributed environments
- Deep understanding of distributed object storage systems such as S3 and GCS, including their architectures, consistency models, and scaling properties
- Deep understanding of data formats such as Parquet and Iceberg, including optimized partitioning, sorting, compression, and read performance
- Expert-level proficiency in Golang, Java, or similar languages, with a strong understanding of concurrency, distributed computing, and system-level optimizations
- Cloud-native data infrastructure experience with AWS, GCP, etc. is a huge plus
- Proven ability to influence technical direction and communicate with clarity
- Willingness to work with a globally distributed team across time zones

Education: BSCS or equivalent required; MSCS or equivalent strongly preferred

How to apply for this opportunity:
Step 1: Click on Apply and register or log in on the portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview.

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help talent find and apply for relevant opportunities, progress in their careers, and get support with any grievances or challenges faced during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) If you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, apply today.
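To make the gRPC-streaming requirement above concrete, here is a minimal, illustrative Go sketch of a TLS-enabled server-streaming service. It is not Netskope's implementation: the insights.proto schema, the generated package at example.com/insights/pb (InsightService, Insight, InsightRequest, and the Register/stream identifiers), the fetchInsights source, and the port and certificate file names are all assumptions made for the example. Only the google.golang.org/grpc, credentials, and standard-library APIs are real.

```go
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"

	pb "example.com/insights/pb" // hypothetical protoc-generated package (assumption)
)

// insightServer implements the server interface that protoc-gen-go-grpc
// would generate for the hypothetical InsightService.
type insightServer struct {
	pb.UnimplementedInsightServiceServer
}

// StreamInsights pushes insight records to the client over one long-lived
// HTTP/2 stream. Send blocks when the client or the network applies
// back-pressure, which keeps server-side memory bounded.
func (s *insightServer) StreamInsights(req *pb.InsightRequest, stream pb.InsightService_StreamInsightsServer) error {
	for insight := range fetchInsights(stream.Context(), req) {
		if err := stream.Send(insight); err != nil {
			return err // client disconnected or the stream broke
		}
	}
	return nil
}

// fetchInsights stands in for the real ingestion source (for example a
// message-bus consumer); here it just emits a few synthetic records.
func fetchInsights(ctx context.Context, _ *pb.InsightRequest) <-chan *pb.Insight {
	out := make(chan *pb.Insight)
	go func() {
		defer close(out)
		for i := 0; i < 3; i++ {
			select {
			case out <- &pb.Insight{}: // fields omitted; the schema is an assumption
			case <-ctx.Done():
				return
			}
		}
	}()
	return out
}

func main() {
	// TLS on the wire, per the security bullet in the listing.
	creds, err := credentials.NewServerTLSFromFile("server.crt", "server.key")
	if err != nil {
		log.Fatalf("loading TLS credentials: %v", err)
	}

	lis, err := net.Listen("tcp", ":8443")
	if err != nil {
		log.Fatalf("listen: %v", err)
	}

	srv := grpc.NewServer(grpc.Creds(creds))
	pb.RegisterInsightServiceServer(srv, &insightServer{})

	if err := srv.Serve(lis); err != nil {
		log.Fatalf("serve: %v", err)
	}
}
```

A production service would add keepalives, deadlines, authentication, and observability, but the core shape is the same: one long-lived HTTP/2 stream per client, with Send providing natural back-pressure for near real-time delivery.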

Posted 1 week ago

Apply

13.0 years

0 Lacs

Greater Bhopal Area

Remote

Duplicate listing: the role description, required skills, and application steps are identical to the Kochi, Kerala posting above (Netskope Data Platform role via Uplers); only the advertised location differs.

Posted 1 week ago

Apply

13.0 years

0 Lacs

Visakhapatnam, Andhra Pradesh, India

Remote

Duplicate listing: the role description, required skills, and application steps are identical to the Kochi, Kerala posting above (Netskope Data Platform role via Uplers); only the advertised location differs.

Posted 1 week ago

Apply

13.0 years

0 Lacs

Indore, Madhya Pradesh, India

Remote

Duplicate listing: the role description, required skills, and application steps are identical to the Kochi, Kerala posting above (Netskope Data Platform role via Uplers); only the advertised location differs.

Posted 1 week ago

Apply

13.0 years

0 Lacs

Dehradun, Uttarakhand, India

Remote

Duplicate listing: the role description, required skills, and application steps are identical to the Kochi, Kerala posting above (Netskope Data Platform role via Uplers); only the advertised location differs.

Posted 1 week ago

Apply

13.0 years

0 Lacs

Mysore, Karnataka, India

Remote

Duplicate listing: the role description, required skills, and application steps are identical to the Kochi, Kerala posting above (Netskope Data Platform role via Uplers); only the advertised location differs.

Posted 1 week ago

Apply

13.0 years

0 Lacs

Thiruvananthapuram, Kerala, India

Remote

Duplicate listing: the role description, required skills, and application steps are identical to the Kochi, Kerala posting above (Netskope Data Platform role via Uplers); only the advertised location differs.

Posted 1 week ago

Apply

13.0 years

0 Lacs

Vijayawada, Andhra Pradesh, India

Remote

Duplicate listing: the role description, required skills, and application steps are identical to the Kochi, Kerala posting above (Netskope Data Platform role via Uplers); only the advertised location differs.

Posted 1 week ago

Apply

13.0 years

0 Lacs

Patna, Bihar, India

Remote

Duplicate listing: the role description, required skills, and application steps are identical to the Kochi, Kerala posting above (Netskope Data Platform role via Uplers); only the advertised location differs.

Posted 1 week ago

Apply

13.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Duplicate listing: the role description, required skills, and application steps are identical to the Kochi, Kerala posting above (Netskope Data Platform role via Uplers); only the advertised location differs.

Posted 1 week ago

Apply

13.0 years

0 Lacs

Ghaziabad, Uttar Pradesh, India

Remote

Duplicate listing: the role description, required skills, and application steps are identical to the Kochi, Kerala posting above (Netskope Data Platform role via Uplers); only the advertised location differs.

Posted 1 week ago

Apply

13.0 years

0 Lacs

Agra, Uttar Pradesh, India

Remote

Duplicate listing: the role description, required skills, and application steps are identical to the Kochi, Kerala posting above (Netskope Data Platform role via Uplers); only the advertised location differs.

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies