Jobs
Interviews

5008 Latency Jobs - Page 27

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

As industries race to embrace AI, traditional database solutions fall short of rising demands for versatility, performance, and affordability. Couchbase is leading the way with Capella, the developer data platform for critical applications in our AI world. By uniting transactional, analytical, mobile, and AI workloads into a seamless, fully managed solution, Couchbase empowers developers and enterprises to build and scale applications with unmatched flexibility, performance, and cost-efficiency—from cloud to edge. Trusted by over 30% of the Fortune 100, Couchbase is unlocking innovation, accelerating AI transformation, and redefining customer experiences. Come join our mission. Software Engineer If you are passionate about designing and developing low-latency and highly scalable distributed systems, this is the job for you. As a Software Engineer, you will design and develop features and enhancements to the core database platform. You will work on technically challenging problems on a regular basis and will contribute to the cutting-edge technologies needed by a modern, distributed NoSQL database. In this job, you will also contribute to the modern-day cloud database and solve problems related to vector indexing, scaling, stability, etc. You will get an opportunity to make a significant impact on the design and architecture of Couchbase's next-generation cloud database. With the increasing demand for AI applications in the market, you will contribute to the AI-enablement of the core database engine to support modern-day AI applications. Responsibilities Contribute to the core database platform with features, enhancements and customer-facing initiatives. Deliver tasks end to end, from requirements gathering through handover to QA, Support and Field teams. Work closely with the customer support team to help with customer success. Actively contribute to fixing bugs, implementing improvements and providing workarounds for customer issues. Take full ownership of tasks while ensuring timely delivery. Be a good team player and work together with team members to successfully deliver on tasks. Write high-quality code adhering to open-source coding standards. Requirements 3+ years of experience in backend development. Strong understanding of multithreading and concurrent programming. Good fundamental knowledge of operating systems, networks and system programming. Ability to work on critical customer cases and quickly fix customer issues. A strong grasp of at least one backend programming language is required, such as C/C++, Python, Java, or Go. Inclination toward database internals, database platforms and cloud platforms. Working knowledge of AI/ML concepts and technologies. A passionate, high-performing individual who is eager to learn and contribute. Minimum Qualification Bachelor's degree in Computer Science and Engineering Why Couchbase? Benefits Modern customer experiences need a flexible cloud database platform that can power applications spanning from cloud to edge and everything in between. Couchbase's mission is to simplify how developers and architects develop, deploy and consume modern applications wherever they are. We have reimagined the database with our fast, flexible and affordable cloud database platform Capella, allowing organizations to quickly build applications that deliver premium experiences to their customers – all with best-in-class price performance.
More than 30% of the Fortune 100 trust Couchbase to power their modern applications and build innovative new ones. See our recent awards to learn why Couchbase is a great place to work.We are honored to be a part of the Best Places to Work Award for the Bay Area and the UK. Couchbase offers a total rewards approach to benefits that recognizes the value you create here, so that you in turn may best serve yourself and your family. Some benefits include: Generous Time Off Program - Flexibility to care for you and your family Wellness Benefits - A variety of world class medical plans to choose from, along with dental, vision, life insurance, and employee assistance programs* Financial Planning - RSU equity program*, ESPP program*, Retirement program* and Business Travel Insurance Career Growth - Be valued, Create value approach Fun Perks - An ergonomic and comfortable in-office / WFH setup. Food & Snacks for in-office employees. And much more! Note: some programs are not applicable to all countries. Please discuss with a Couchbase recruiter to learn more. Learn More About Couchbase News and Press Releases Couchbase Capella Couchbase Blog Investors Disclaimer Couchbase is committed to being an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Join an impact initiative group and experience the amazing feeling of Couchbase can-do culture. By using this website and submitting your information, you acknowledge our Candidate Privacy Notice and understand your personal information may be processed in accordance with our Candidate Privacy Notice following guidelines in your country of application.
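The vector-indexing work mentioned in this posting is, at its core, about accelerating similarity search over embeddings. As a rough illustration only (not Couchbase's implementation), the sketch below does a brute-force top-k cosine search in Python over a made-up corpus; a vector index exists precisely to avoid this linear scan once the corpus grows to millions of vectors.

```python
import numpy as np

def top_k_cosine(query: np.ndarray, vectors: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k stored vectors most similar to the query (cosine).

    Brute force is O(n*d) per query; index structures (HNSW, IVF, etc.)
    exist to avoid this full scan at scale.
    """
    q = query / (np.linalg.norm(query) + 1e-12)
    v = vectors / (np.linalg.norm(vectors, axis=1, keepdims=True) + 1e-12)
    scores = v @ q                      # cosine similarity against every stored vector
    return np.argsort(-scores)[:k]      # indices of the k best matches

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    corpus = rng.normal(size=(10_000, 128)).astype(np.float32)   # 10k fake embeddings, dim 128
    print(top_k_cosine(corpus[42], corpus, k=3))                  # vector 42 should rank first
```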

Posted 2 weeks ago

Apply

0.0 - 10.0 years

20 - 45 Lacs

Bengaluru, Karnataka

On-site

12+ years of experience with Linux device driver development, preferably with a focus on PCIe devices. Open Source Contribution: Experience contributing to the Linux kernel or relevant open-source projects is highly valued. Expertise in C Language: Mastery of C for low-level, performance-sensitive code, including bitwise operations, memory management, pointer arithmetic, and data structure optimization. Familiarity with C++: Advantageous for certain projects, though most Linux kernel drivers are written in C. Understanding object-oriented concepts in a C context is also helpful. Deep Understanding of Linux Kernel Architecture: Familiarity with kernel space versus user space, kernel modules, device driver concepts, and memory management. Kernel Module Development: Experience writing loadable kernel modules (LKMs) and integrating them with the Linux build system. Debugging and Profiling: Proficiency with debugging tools such as kgdb, ftrace, perf, dmesg, and sysfs interfaces to troubleshoot and optimize drivers. Comprehensive Understanding of PCIe Specification: Knowledge of the PCIe standard, including enumeration, configuration space, BARs (Base Address Registers), MSI/MSI-X interrupts, and bus mastering. Device Datasheet Interpretation: Ability to read and interpret PCIe device hardware documentation, including register maps, timing requirements, and signaling protocols. Interfacing with Firmware/BIOS: Understanding how PCIe devices are initialized during system boot, and the mechanisms by which firmware and BIOS communicate with hardware. Device Driver Development Lifecycle – Probing and Initialization: Experience writing probe() and remove() functions to handle device enumeration and teardown. Resource Management: Skills in managing memory and hardware resources, including DMA (Direct Memory Access), I/O regions, and interrupt lines. Interrupt Handling: Ability to write efficient and robust interrupt handlers, using mechanisms such as bottom halves, tasklets, work queues, and threaded interrupts. Power Management: Familiarity with runtime and system power management interfaces, including suspend/resume operations. Concurrency and Synchronization: Understanding race conditions, atomic operations, spinlocks, mutexes, and semaphores in a preemptible kernel environment. Bachelor’s or Master’s Degree: In Computer Science, Electrical or Computer Engineering, or a related technical field. Desirable Additional Qualifications: Knowledge of Other Operating Systems: Familiarity with Windows, FreeBSD or RTOS driver models for cross-platform development. Experience with FPGA, SoC, or Custom Hardware: Useful for teams working on nonstandard PCIe endpoints or accelerators. Performance Tuning: Skills in profiling and optimizing for low-latency, high-throughput data paths. Community Engagement: Involvement in Linux kernel mailing lists, conferences (such as Linux Plumbers Conference), or speaking at industry events. Note: Please apply with your CV only if you can attend interviews on weekdays and have a short notice period (15 days maximum). Job Location: Bangalore (Work from office) Job Types: Full-time, Permanent Pay: ₹2,083,594.88 - ₹4,528,981.39 per year Benefits: Provident Fund Experience: C: 10 years (Required) Linux device driver: 10 years (Required) Location: Bangalore, Karnataka (Required) Work Location: In person
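The driver work described above is kernel-space C, but the PCIe configuration-space concepts it lists (vendor/device IDs, the standard header) can be illustrated from user space. A minimal sketch, assuming a Linux host with sysfs mounted; it only reads the first bytes of each device's config space and is not a substitute for writing a probe()/remove() driver.

```python
import os
import struct

SYSFS_PCI = "/sys/bus/pci/devices"  # standard sysfs location on Linux

def read_config_header(bdf: str) -> dict:
    """Read the start of a device's PCI configuration space via sysfs.

    Offsets follow the PCI configuration header: vendor ID at 0x00,
    device ID at 0x02, both 16-bit little-endian.
    """
    with open(os.path.join(SYSFS_PCI, bdf, "config"), "rb") as f:
        header = f.read(64)  # the standard header is readable without root
    vendor_id, device_id = struct.unpack_from("<HH", header, 0x00)
    return {"bdf": bdf, "vendor": f"{vendor_id:04x}", "device": f"{device_id:04x}"}

if __name__ == "__main__":
    for bdf in sorted(os.listdir(SYSFS_PCI)):
        info = read_config_header(bdf)
        print(f"{info['bdf']}  {info['vendor']}:{info['device']}")
```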

Posted 2 weeks ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Key responsibilities: A. Product Strategy & Vision 1. Define and refine the product vision aligned with Onelap's mission: "To simplify how the world tracks everything that moves." 2. Identify opportunities across GPS tracking devices, SIM-based subscriptions, vehicle security, and support systems. 3. Translate business goals into product strategies that create customer value and business impact. B. Roadmap Planning & Sprint Execution 1. Own the product roadmap, prioritizing features and improvements that align with business and user needs. 2. Break down epics into actionable user stories and manage sprint progress through JIRA. 3. Ensure JIRA boards are up to date, with clear statuses, assignee ownership, and sprint burndown visibility. C. App Stability & Quality Assurance 1. Work closely with QA and engineering to monitor crash reports, bug trends, and performance bottlenecks across Android and iOS. 2. Drive initiatives to reduce app crashes, improve loading speeds, and optimize battery and network efficiency. 3. Establish and track app health metrics (ANR rates, crash-free sessions, etc.) using tools like Firebase Crashlytics or similar. D. App Ratings & User Feedback 1. Own strategies to improve app store ratings and reviews. 2. Proactively gather in-app feedback to detect friction points and resolve user pain areas. 3. Coordinate with design, support, and tech to ensure delightful experiences at key touchpoints (e.g., onboarding, activation, renewals). E. Churn Reduction & Subscription Success 1. Design nudges and renewal flows that reduce subscription churn for both app and SIM services. 2. Improve expiry communication via WhatsApp, in-app alerts, and recharge-driven reactivations. 3. Track retention metrics and identify why users uninstall or stop renewing. F. Customer Experience & Returns 1. Analyze return and refund trends to identify root causes: poor experience, app issues, product confusion, etc. 2. Partner with operations to reduce return rates by improving clarity in product packaging, setup instructions, and feature expectations. 3. Launch experience improvements (e.g., simplified QR activation, real-time location accuracy) to reduce dissatisfaction. G. User & Market Insights 1. Deep-dive into usage patterns, feature adoption, and heatmaps to guide decisions. 2. Engage with resellers, fleet managers, and vehicle owners to understand their evolving needs. 3. Monitor competition to benchmark features, pricing, and user journeys. H. Operational & Technical Collaboration 1. Partner with engineering on SIM mapping automation, tracker offline issues, and backend optimizations. 2. Ensure scalable, secure, and low-latency performance across services. 3. Align with the customer support team to convert frequently asked queries into in-app solutions or self-service flows. I. Team Communication & Cross-Functional Alignment 1. Act as the voice of the product across departments: founders, tech, marketing, support, design. 2. Share clear product documentation, progress reports, and milestone updates. 3. Drive disciplined decision-making and sprint velocity through JIRA-based rituals and retrospectives.
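For the app-health metrics mentioned above, "crash-free sessions" is usually expressed as a simple percentage of tracked sessions. A minimal sketch; the session counts below are hypothetical, not Onelap data.

```python
def crash_free_sessions_pct(total_sessions: int, crashed_sessions: int) -> float:
    """Share of sessions that ended without a crash, as a percentage."""
    if total_sessions == 0:
        return 100.0
    return 100.0 * (total_sessions - crashed_sessions) / total_sessions

# e.g. 250 crashed sessions out of 180,000 tracked sessions
print(round(crash_free_sessions_pct(180_000, 250), 2))  # -> 99.86
```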

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At eBay, we're more than a global ecommerce leader — we’re changing the way the world shops and sells. Our platform empowers millions of buyers and sellers in more than 190 markets around the world. We’re committed to pushing boundaries and leaving our mark as we reinvent the future of ecommerce for enthusiasts. Our customers are our compass, authenticity thrives, bold ideas are welcome, and everyone can bring their unique selves to work — every day. We're in this together, sustaining the future of our customers, our company, and our planet. Join a team of passionate thinkers, innovators, and dreamers — and help us connect people and build communities to create economic opportunity for all. Machine Learning Engineer (T25), Product Knowledge Do you love Big Data? Deploying Machine Learning models? Challenging optimization problems? Knowledgeable, collaborative co-workers? Come work at eBay and help us redefine global, online commerce! Who Are We? The Product Knowledge team is at the epicenter of eBay’s Tech-driven, Customer-centric overhaul. Our team is entrusted with creating and using eBay’s Product Knowledge - a vast Big Data system which is built up of listings, transactions, products, knowledge graphs, and more. Our team has a mix of highly proficient people from multiple fields such as Machine Learning, Data Science, Software Engineering, Operations, and Big Data Analytics. We have a strong culture of collaboration, and plenty of opportunity to learn, make an impact, and grow! What Will You Do We are looking for exceptional Engineers, who take pride in creating simple solutions to apparently-complex problems. Our Engineering tasks typically involve at least one of the following: Building a pipeline that processes up to billions of items, frequently employing ML models on these datasets Creating services that provide Search or other Information Retrieval capabilities at low latency on datasets of hundreds of millions of items Crafting sound API design and driving integration between our Data layers and Customer-facing applications and components Designing and running A/B tests in Production experiences in order to vet and measure the impact of any new or improved functionality If you love a good challenge, and are good at handling complexity - we’d love to hear from you! eBay is an amazing company to work for. Being on the team, you can expect to benefit from: A competitive salary - including stock grants and a yearly bonus A healthy work culture that promotes business impact and at the same time highly values your personal well-being Being part of a force for good in this world - eBay truly cares about its employees, its customers, and the world’s population, and takes every opportunity to make this clearly apparent Job Responsibilities Design, deliver, and maintain significant features in data pipelines, ML processing, and / or service infrastructure Optimize software performance to achieve the required throughput and / or latency Work with your manager, peers, and Product Managers to scope projects and features Come up with a sound technical strategy, taking into consideration the project goals, timelines, and expected impact Take point on some cross-team efforts, taking ownership of a business problem and ensuring the different teams are in sync and working towards a coherent technical solution Take active part in knowledge sharing across the organization - both teaching and learning from others Minimum Qualifications Passion and commitment for technical excellence B.Sc. or M.Sc. 
in Computer Science or an equivalent professional experience 2+ years of software design and development experience, tackling non-trivial problems in backend services and / or data pipelines A solid foundation in Data Structures, Algorithms, Object-Oriented Programming, Software Design, and core Statistics knowledge Experience in production-grade coding in Java, and Python/Scala Experience in the close examination of data and computation of statistics Experience in using and operating Big Data processing pipelines, such as: Hadoop and Spark Good verbal and written communication and collaboration skills Please see the Talent Privacy Notice for information regarding how eBay handles your personal data collected when you use the eBay Careers website or apply for a job with eBay. eBay is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, veteran status, and disability, or other legally protected status. If you have a need that requires accommodation, please contact us at talent@ebay.com. We will make every effort to respond to your request for accommodation as soon as possible. View our accessibility statement to learn more about eBay's commitment to ensuring digital accessibility for people with disabilities. The eBay Jobs website uses cookies to enhance your experience. By continuing to browse the site, you agree to our use of cookies. Visit our Privacy Center for more information.
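For the A/B testing responsibility described above, a common way to vet an experiment is a two-proportion z-test on conversion counts. A minimal sketch using only the Python standard library; the traffic numbers are invented and this is not a description of eBay's actual experimentation stack.

```python
from math import sqrt
from statistics import NormalDist

def ab_test_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Two-proportion z-test: is variant B's conversion rate different from A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # std. error of the difference
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))             # two-sided p-value
    return z, p_value

# hypothetical traffic split: 50k sessions per arm
z, p = ab_test_z(conv_a=2300, n_a=50_000, conv_b=2450, n_b=50_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```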

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

We are seeking a highly capable Data Platform Engineer to build and maintain a secure, scalable, and air-gapped-compatible data pipeline that supports multi-tenant ingestion, transformation, warehousing, and dashboarding. You’ll work across the stack: from ingesting diverse data sources (files, APIs, DBs), transforming them via SQL or Python tools, storing them in an OLAP-optimized warehouse, and surfacing insights through customizable BI dashboards. Key Responsibilities: 1. Data Ingestion (ETL Engine): Design and maintain data pipelines to ingest from: File systems: CSV, Excel, PDF, binary formats Databases: Using JDBC connectors (PostgreSQL, MySQL, etc.) APIs: REST, XML, GraphQL endpoints Implement and optimize: Airflow for scheduling and orchestration Apache NiFi for drag-and-drop pipeline development Kafka / Redis Streams for real-time or event-based ingestion Develop custom Python connectors for air-gapped environments Handle binary data using PyPDF2, protobuf, OpenCV, Tesseract, etc. Ensure secure storage of raw data in MinIO, GlusterFS, or other vaults 2. Transformation Layer: Implement SQL/code-based transformation using: dbt-core for modular SQL pipelines Dask or Pandas for mid-size data processing Apache Spark for large-scale, distributed ETL Integrate Great Expectations or other frameworks for data quality validation (optional in on-prem) Optimize data pipelines for latency, memory, and parallelism 3. Data Warehouse (On-Prem): Deploy and manage on-prem OLAP/RDBMS options including: ClickHouse for real-time analytics Apache Druid for event-driven dashboards PostgreSQL, Greenplum, and DuckDB for varied OLAP/OLTP use cases Architect multi-schema / multi-tenant isolation strategies Maintain warehouse performance and data consistency across layers 4. BI Dashboards: Develop and configure per-tenant dashboards using: Metabase (preferred for RBAC + multi-tenant) Apache Superset or Redash for custom exploration Grafana for technical metrics Embed dashboards into customer portals Configure PDF/Email-based scheduled reporting Work with stakeholders to define marketing, operations, and executive KPIs Required Skills & Qualifications: 5+ years of hands-on experience with ETL tools, data transformation, and BI platforms Advanced Python skills for custom ingestion and transformation logic Strong understanding of SQL, data modeling, and query optimization Experience with Apache NiFi, Airflow, Kafka, or Redis Streams Familiarity with at least two: ClickHouse, Druid, PostgreSQL, Greenplum, DuckDB Experience building multi-tenant data platforms Comfort working in air-gapped / on-prem environments Strong understanding of security, RBAC, and data governance practices Nice-to-Have Skills: Experience in regulated industries (BFSI, Telecom, government) Knowledge of containerization (Docker/Podman) and orchestration (K8s/OpenShift) Exposure to data quality and validation frameworks (e.g., Great Expectations) Experience with embedding BI tools in web apps (React, Django, etc.) What We Offer: Opportunity to build a cutting-edge, open-source-first data platform for real-time insights Collaborative team environment focused on secure and scalable data systems Competitive salary and growth opportunities
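As a rough illustration of the Airflow orchestration mentioned above, a two-task ingestion DAG might look like the sketch below. It assumes Airflow 2.4+; the DAG name, file path, and warehouse target are placeholders, not part of the actual platform.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_csv():
    # placeholder: pull the day's CSV drop from the landing zone (path is hypothetical)
    print("reading /data/landing/orders.csv")

def load_to_warehouse():
    # placeholder: bulk-load the cleaned file into the OLAP store (e.g. ClickHouse)
    print("loading into warehouse")

with DAG(
    dag_id="tenant_orders_ingestion",          # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_csv", python_callable=extract_csv)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    extract >> load                             # run extraction before the load
```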

Posted 2 weeks ago

Apply

15.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Netskope Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter@Netskope. The Netskope Risk Insights team delivers complex, distributed and hybrid cloud systems that provide customers with a multidimensional view of the applications, devices and users on their network. We illuminate unsanctioned and unsupported applications and devices and track user behavior so customers can visualize and create policies to minimize risk. Risk analysis, policy enforcement, compliance and audit mechanisms are some customer use cases we satisfy. The team owns a scalable, cloud-managed on-prem and public cloud platform that hosts customer facing data plane and log parsing services. Thus, extending the Netskope cloud to the customer data center and public cloud environments. What’s In It For You In the Risk Insights team, you will wear multiple hats of manager, technical leader, influencer, coach and mentor. You will lead a growing engineering team focused on building cloud and on-prem services in the SaaS and IaaS space. Be an owner and influencer in a geographically distributed organization, partner with the Netskope business and interact with external customers. Opportunities to bridge the gap between cloud and on-prem solutions. What You Will Be Doing Lead, coach, mentor and inspire a team fostering a culture of ownership, trust, innovation and continuous improvement. Drive the strategic direction and technical leadership of the team Collaborate across organizations to ensure creation of a shared vision, roadmap and timely delivery of products/services Be accountable for a high quality software lifecycle Hands-on participant in technical discussions, designs and code development Build microservices, abstraction layers and platforms to make the hybrid and multi-cloud footprint low latency, cost efficient and customer friendly Required Skills And Experience 15+ years of demonstrable experience building services/products with 10+ years in a management role Effective verbal and written communication Demonstrable experience balancing business and engineering priorities Proven history of building customer focused high performing teams Comfortable with ambiguity and taking initiative to find and solve problems Experience contributing in a geographically distributed environment preferred Good understanding of distributed systems, data-structures and algorithms Strong background in designing scalable services with monitoring and alerting systems Programming in Python, Go, C/C++, etc. 
Experience working with CI/CD environments, automation frameworks Experience in designing and building RESTful services Knowledge of network security, databases, authentication & authorization mechanisms, messaging technologies, HTTP, and TCP. Familiarity with file systems and data storage technologies (e.g. Ceph, NFS, etc.) Experience building and debugging software on Linux platforms Experience working with Docker, Kubernetes (K8s), public clouds (AWS, GCP, Azure, etc.) Education B.Tech or equivalent required; Master's or equivalent strongly preferred Netskope is committed to implementing equal employment opportunities for all employees and applicants for employment. Netskope does not discriminate in employment opportunities or practices based on religion, race, color, sex, marital or veteran status, age, national origin, ancestry, physical or mental disability, medical condition, sexual orientation, gender identity/expression, genetic information, pregnancy (including childbirth, lactation and related medical conditions), or any other characteristic protected by the laws or regulations of any jurisdiction in which we operate. Netskope respects your privacy and is committed to protecting the personal information you share with us; please refer to Netskope's Privacy Policy for more details.

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description What We Do At Goldman Sachs, our Engineers don’t just make things – we make things possible. Change the world by connecting people and capital with ideas. Solve the most challenging and pressing engineering problems for our clients. Join our engineering teams that build massively scalable software and systems, architect low latency infrastructure solutions, proactively guard against cyber threats, and leverage machine learning alongside financial engineering to continuously turn data into action. Create new businesses, transform finance, and explore a world of opportunity at the speed of markets . Engineering, which is comprised of our Technology Division and global strategists’ groups, is at the critical center of our business, and our dynamic environment requires innovative strategic thinking and immediate, real solutions. Want to push the limit of digital possibilities? Start here. Goldman Sachs Asset And Wealth Management Division As one of the world's leading asset managers, our mission is to help our clients achieve their investment goals. To best serve our clients' diverse and evolving needs, we have built our business to be global, broad, and deep across asset classes, geographies, and solutions. Within the Multi Asset Solutions (MAS) investing group, we seek to develop a first-in-class digital advice platform across 401(k), IRA, and brokerage accounts, to provide individuals with custom tailored investment strategies to meet their retirement objectives. Who We Look For Goldman Sachs Engineers are innovators and problem-solvers, building solutions in risk management, big data, mobile and more. We look for creative collaborators who evolve, adapt to change and thrive in a fast-paced global environment. How will you fulfill your potential? Help us evolve our continuous integration and deployment infrastructure Create and maintain system design and operation documentation Help design and implement our container orchestration strategy Help design and implement our artifact management strategy Basic Qualifications Experience designing and implementing continuous integration and deployment frameworks Experience working through the SDLC, building and promoting applications from development all the way to production DevOps/Infrastructure design and management experience at an enterprise level using infrastructure as code (AWS CDK, Terraform, or Ansible) Experience building and supporting multi-tier production applications running at AWS or another cloud provider Experience working alongside application teams to develop and deploy infrastructure Experience with source control system (GitLab or GitHub) Preferred Qualifications AWS certification(s) Bachelors or Masters in Computer Science or related field or equivalent experience 2+ years experience Experience working in a heavily regulated or financial industry Experience with cloud migration Experience working with containers Management of dedicated artifact systems About Goldman Sachs At Goldman Sachs, we commit our people, capital and ideas to help our clients, shareholders and the communities we serve to grow. Founded in 1869, we are a leading global investment banking, securities and investment management firm. Headquartered in New York, we maintain offices around the world. We believe who you are makes you better at what you do. 
We're committed to fostering and advancing diversity and inclusion in our own workplace and beyond by ensuring every individual within our firm has a number of opportunities to grow professionally and personally, from our training and development opportunities and firmwide networks to benefits, wellness and personal finance offerings and mindfulness programs. Learn more about our culture, benefits, and people at GS.com/careers . We’re committed to finding reasonable accommodations for candidates with special needs or disabilities during our recruiting process. Learn more: https://www.goldmansachs.com/careers/footer/disability-statement.html © The Goldman Sachs Group, Inc., 2025. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer
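For the infrastructure-as-code requirement mentioned above, AWS CDK lets you express cloud resources in Python. A minimal, illustrative CDK v2 stack; the stack and bucket names are hypothetical and this is not an actual Goldman Sachs configuration.

```python
from aws_cdk import App, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class ArtifactStack(Stack):
    """Minimal stack: a versioned S3 bucket to hold build artifacts."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(
            self,
            "ArtifactBucket",   # logical ID; the physical bucket name is generated
            versioned=True,     # keep prior artifact versions
        )

app = App()
ArtifactStack(app, "ArtifactStack")
app.synth()                     # emits the CloudFormation template
```

Running `cdk deploy` against an app like this synthesizes and applies the generated CloudFormation template.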

Posted 2 weeks ago

Apply

2.0 years

5 - 6 Lacs

Delhi, India

Remote

Job Description Job Title: React Native Developer Reporting Manager: Team Lead Location: India Company SegWitz is a Disruptive Innovation Partner working with enterprises in developing and integrating software for unlocking business potential through digitalization and strategic planning for exponential growth. We gather great minds in tech to build a world-class team of thinkers, innovators, and leaders. We focus on creating an unforgettable digital experience through ultra-convenience, customer satisfaction, accessibility, and transparency in the consumer market. Requirements, Skills & Qualifications 2-4 years of experience with React Native itself Has their own Mac system to work remotely Can start working with us immediately Firm grasp of the JavaScript and TypeScript languages and their nuances, including ES6+ syntax Knowledge of functional or object-oriented programming Experience with Redux, Redux Saga and middleware Ability to write well-documented, clean JavaScript code Rock-solid at working with third-party dependencies and debugging dependency conflicts Familiarity with native build tools, like Xcode, Gradle, Android Studio, IntelliJ Understanding of REST APIs, the document request model, and offline storage Experience with automated testing suites, like Jest or Mocha Duties & Responsibilities Write reusable, testable, and efficient code. Should be able to design and implement low-latency, high-availability, and high-performance applications. Architect & design technically robust, flexible and scalable solutions Build pixel-perfect, amazingly smooth UIs across both mobile platforms. Diagnose and fix bugs and performance bottlenecks for performance that feels native. Reach out to the open-source community to encourage and help implement mission-critical software fixes. Maintain code and write automated tests to ensure the product is of the highest quality. Work with third-party dependencies and debug dependency conflicts. Understand REST APIs, the document request model, and offline storage

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

At Qube Cinema, technology and storytelling come together to create world-class cinematic experiences using In-Camera VFX (ICVFX) and real-time virtual production workflows. Our state-of-the-art LED volume stage is the heart of our production pipeline. We’re looking for a Volume Head to lead and manage this critical piece of our virtual production infrastructure. This is a senior, multidisciplinary leadership role requiring a deep understanding of real-time rendering, on-set production, LED volume systems, and collaborative team management. As Volume Head, you will be responsible for end-to-end operations of the LED volume stage — from technical setup and calibration to on-set execution, content playback, and real-time troubleshooting. You will work closely with Directors, DoPs, Production, VAD, and technical teams to ensure that the volume delivers on the creative vision while maintaining efficiency, consistency, and technical excellence. What you will be responsible for: Stage Operations & Oversight Oversee daily operations of the LED volume, ensuring technical and creative readiness for each shoot Supervise screen calibration, content playback systems, camera tracking, and real-time sync across systems Manage volume crew including system techs, playback operators, tracking supervisors, and media wranglers Serving as a front-line support representative of Qube on the field to clients, vendors, and other partners Cross-Department Collaboration Work closely with the Director of Photography, Virtual Art Department (VAD), and Production Designers to ensure environments are optimized for camera and lighting Interface with the Unreal Engine team for environment testing and approvals Coordinate with the ICVFX Supervisor and production team to ensure pre-shoot workflows are followed Participate in any required client or market-specific training, calls, meetings, etc. Technical Leadership Own and optimise the volume pipeline including tracking systems, media servers, genlock/timecode sync, etc. Maintain a deep understanding of render performance, latency management, Display configuration, and physical volume hardware Collaborate with engineering and R&D teams to implement upgrades and new capabilities Planning & Scheduling Manage shoot schedules, tech prep days, and volume resets in collaboration with Production and Line Producers Evaluate scene complexity and ensure all technical requirements (tracking, playback, lighting integration) are accounted for Create and manage shot-specific volume configurations Quality Assurance & Troubleshooting Monitor output quality during rehearsals and takes; quickly troubleshoot any on-set issues Maintain show continuity logs for playback, calibration states, and tracking environments Maintain accurate records for assets, appliances, maintenance logs and preventive maintenance Enforce best practices for data integrity, content versioning, and screen health Team Management & Training Lead, train, and mentor a team of operators and technicians for ongoing stage operations Work with HR/TechOps to recruit and onboard freelance or full-time volume crew Foster a safe, collaborative, and high-performance on-set environment What we are looking for: Experience: 6–10 years in film/TV production, with 3+ years in LED volume/virtual production leadership. Technical Expertise: Deep knowledge of real-time rendering (Unreal Engine), camera tracking systems, LED hardware, media servers and timecode sync. 
On-Set Experience: Strong familiarity with cinematography, lighting for LED volumes, and the pace of physical production environments. Leadership: Proven experience managing cross-functional teams in high-pressure situations. Workflow Knowledge: Understanding of the full virtual production pipeline including VAD handoff, content testing, playback integration, and ICVFX best practices. Problem Solving: Calm under pressure with a solutions-first mindset and ability to troubleshoot technical and creative issues in real time. Communication: Excellent interpersonal skills to manage expectations across departments and clearly articulate needs, risks, and timelines.

Posted 2 weeks ago

Apply

25.0 years

0 Lacs

New Delhi, Delhi, India

On-site

NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by great technology—and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing. An era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world. NVIDIA is looking for a passionate member to join our DGX Cloud Engineering Team as a Sr. Site Reliability Engineer. In this role, you will play a significant part in helping to craft and guide the future of AI & GPUs in the Cloud. NVIDIA DGX Cloud is a cloud platform tailored for AI tasks, enabling organizations to transition AI projects from development to deployment in the age of intelligent AI. Are you passionate about cloud software development, and do you strive for quality? Do you pride yourself on building cloud-scale software systems? If so, join our team at NVIDIA, where we are dedicated to delivering GPU-powered services around the world! What You'll Be Doing You will play a crucial role in ensuring the success of the Omniverse on DGX Cloud platform by helping to build our deployment infrastructure processes, creating world-class SRE measurement and automation tools to improve the efficiency of operations, and maintaining a high standard of perfection in service operability and reliability. Design, build, and implement scalable cloud-based systems for PaaS/IaaS. Work closely with other teams on new products or features/improvements of existing products. Develop, maintain and improve cloud deployment of our software. Participate in the triage & resolution of complex infra-related issues Collaborate with developers, QA and Product teams to establish, refine and streamline our software release process and software observability to ensure service operability, reliability, and availability. Maintain services once live by measuring and monitoring availability, latency, and overall system health using metrics, logs, and traces Develop, maintain and improve automation tools that can help improve efficiency of SRE operations Practice balanced incident response and blameless postmortems Be part of an on-call rotation to support production systems What We Need To See BS or MS in Computer Science or equivalent program from an accredited University/College. 8+ years of hands-on software engineering or equivalent experience. Demonstrated understanding of cloud design in the areas of virtualization and global infrastructure, distributed systems, and security. Expertise in Kubernetes (K8s) & KubeVirt and building RESTful web services. Understanding of building agentic AI solutions, preferably NVIDIA open-source AI solutions. Demonstrated working experience with SRE principles like metrics emission for observability, monitoring, and alerting using logs, traces and metrics Hands-on experience working with Docker, containers, Infrastructure as Code (e.g., Terraform), and CI/CD deployment. Exhibit knowledge of working with CSPs, for example AWS (Fargate, EC2, IAM, ECR, EKS, Route53, etc.), Azure, etc.
Ways To Stand Out From The Crowd Expertise in technologies such as StackStorm, OpenStack, Red Hat OpenShift, and AI DBs like Milvus. A track record of solving complex problems with elegant solutions. Prior experience with Go, Python, and React. Demonstrated delivery of complex projects in previous roles. Demonstrated ability in developing frontend applications with concepts of SSA and RBAC. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation. JR2000387
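One concrete form of the "metrics emission for observability" requirement above is exposing Prometheus metrics from a service. A minimal sketch with the prometheus_client library; the metric names and simulated work are placeholders rather than anything from NVIDIA's stack.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("demo_requests_total", "Requests handled", ["status"])
LATENCY = Histogram("demo_request_latency_seconds", "Request latency in seconds")

@LATENCY.time()                      # records the duration of each call
def handle_request() -> None:
    time.sleep(random.uniform(0.01, 0.1))       # stand-in for real work
    REQUESTS.labels(status="ok").inc()

if __name__ == "__main__":
    start_http_server(8000)          # exposes /metrics for a Prometheus scrape
    while True:
        handle_request()
```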

Posted 2 weeks ago

Apply

15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

When you join Proclink, you will be working with a young and growing credit card business as the Principal Architect for a customer lifecycle stage (Customer acquisitions, Customer management, or Collections). You will be deeply involved with the business and technical stakeholders in comprehending the business needs, reviewing & discussing the PRDs, and translating the PRDs into design, ARD, engineering project plan, Jira tickets, test cases, and implementation plan. You will be working closely with and guiding the offshore as well as onshore engineering team to work through the sprints and all the way to UAT & post-prod support. You will have to talk to external parties – such as product/platform vendors, data providers, and partners of the business – to flesh out details on product specs, data formats, latency, constraints, etc., to ensure that the integration/customization happens according to the requirements and in line with the client’s tech environment. You should be ready to work on new architectures / design patterns to ensure scalability and efficiency. While you may not code, you will definitely need to be able to guide the team, review their work and defend your team’s work in front of the senior engineering executives, including the CTO. You will lead technical reviews, code assessments, and solution designs to maintain high-quality product delivery. Ensure that the production applications are stable and operating as expected. Contribute to the client’s strategic technology planning for the cards business. It is not necessary that you have worked in credit cards or banking in the past, but such experience will surely be a plus. If you are a person who waits for instructions, hesitates to take initiative, does not have well-thought-out opinions, and is not a go-getter or a networker, please do not apply. This is a US-based client, so you should expect reasonable overlap with EST hours. Job specification: You should have spent at least 15 years in the Technology industry, working with COEs or in GCCs or product companies; should have worked with global clients in an offshore-onshore environment. You should have interacted directly with business stakeholders, peers in the technology teams, and C-suite executives in your career. You should be proud of a few major initiatives you have taken and a few projects you have led. Experience in full-stack web application development across frontend, backend, and infrastructure, and a solid understanding of technical fundamentals. Advanced knowledge of Object-Oriented Design, Microservices, Service Oriented Architecture and Application Integration Design Patterns Solution architecture, Systems Design, Design Patterns, and frameworks implementation knowledge for enterprise solutions. Expert in microservices and containerization. Should have experience in one of the UI technologies like Angular or React, plus HTML5, JavaScript, CSS, Bootstrap. Should know about designing and implementing secure solutions. Experience making architecture-level decisions that span teams, applications, and technologies with demonstrable improvements in the quality and speed of an engineering organization’s output Strong track record of recruiting and retaining high-performing engineering talent. Strong command of verbal and written communication to drive alignment and collaborate across functional teams.
Ability to interface with and influence leaders across an organization with poise Competency to foster and build a culture of success, accountability, and teamwork Experience in guiding the development of observable systems with robust metrics and alerts Ability to navigate in a nimble environment and drive success in unknown territory Minimum of an undergraduate degree in Computer Science or a related field Core Tech Stack: Node, TypeScript, JavaScript, AngularJS, RESTful APIs, Microservices, AWS, Docker, Kubernetes, Agile and SCRUM. You should be ready to work from our Hyderabad office. There will be travel – both within India and to client locations, but not more than 15-20%. Interested candidates can share their resume at Anusha.patel@proclink.com.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Company We are seeking a skilled Snowflake Developer with 5+ years of experience in designing, developing, and optimizing Snowflake data solutions. The ideal candidate will have strong expertise in Snowflake SQL, ETL/ELT pipelines, and cloud data integration. This role involves building scalable data warehouses, implementing efficient data models, and ensuring high-performance data processing in Snowflake. About the Role This role involves building scalable data warehouses, implementing efficient data models, and ensuring high-performance data processing in Snowflake. Responsibilities Snowflake Development & Optimization Design and develop Snowflake databases, schemas, tables, and views following best practices. Write complex SQL queries, stored procedures, and UDFs for data transformation. Optimize query performance using clustering, partitioning, and materialized views. Implement Snowflake features (Time Travel, Zero-Copy Cloning, Streams & Tasks). Data Pipeline Development Build and maintain ETL/ELT pipelines using Snowflake, Snowpark, Python, or Spark. Integrate Snowflake with cloud storage (S3, Blob) and data ingestion tools (Snowpipe). Develop CDC (Change Data Capture) and real-time data processing solutions. Data Modeling & Warehousing Design star schema, snowflake schema, and data vault models in Snowflake. Implement data sharing, secure views, and dynamic data masking. Ensure data quality, consistency, and governance across Snowflake environments. Performance Tuning & Troubleshooting Monitor and optimize Snowflake warehouse performance (scaling, caching, resource usage). Troubleshoot data pipeline failures, latency issues, and query bottlenecks. Work with DevOps teams to automate deployments and CI/CD pipelines. Collaboration & Documentation Work closely with data analysts, BI teams, and business stakeholders to deliver data solutions. Document data flows, architecture, and technical specifications. Mentor junior developers on Snowflake best practices. Qualifications 5+ years in database development, data warehousing, or ETL. 3+ years of hands-on Snowflake development experience. Strong SQL or Python skills for data processing. Experience with Snowflake utilities (SnowSQL, Snowsight, Snowpark). Knowledge of cloud platforms (AWS/Azure) and data integration tools (Coalesce, Airflow, DBT). Certifications: SnowPro Core Certification (preferred). Preferred Skills Familiarity with data governance and metadata management.
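As a small illustration of the Snowflake development described above, the sketch below runs a Time Travel query through the official Python connector. The account details and the orders table are placeholders, assuming snowflake-connector-python is installed; real credentials should come from a secrets manager, not literals.

```python
import snowflake.connector  # pip install snowflake-connector-python

# connection parameters are placeholders; use your account's values
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="...",
    warehouse="ANALYTICS_WH",
    database="SALES_DB",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Time Travel: read the table as it looked one hour ago (offset in seconds)
    cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -3600)")
    print("rows one hour ago:", cur.fetchone()[0])
finally:
    conn.close()
```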

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

Remote

Mizuho Global Services Pvt Ltd (MGS) is a subsidiary company of Mizuho Bank, Ltd, which is one of the largest banks or so-called ‘Mega Banks’ of Japan. MGS was established in the year 2020 as part of Mizuho’s long-term strategy of creating a captive global processing center for remotely handling banking and IT related operations of Mizuho Bank’s domestic and overseas offices and Mizuho’s group companies across the globe. At Mizuho we are committed to a culture that is driven by ethical values and supports diversity in all its forms for its talent pool. Direction of MGS’s development is paved by its three key pillars, which are Mutual Respect, Discipline and Transparency, which are set as the baseline of every process and operation carried out at MGS. What’s in it for you? o Immense exposure and learning o Excellent career growth o Company of highly passionate leaders and mentors o Ability to build things from scratch Know more about MGS : - https://www.mizuhogroup.com/asia-pacific/mizuho-global-services Job Description: The Network Infrastructure Specialist is responsible for designing, implementing, maintaining, and optimizing the organisation’s network infrastructure. This includes ensuring high availability, security, and performance of systems to support business operations. The role requires expertise in server management, network architecture, troubleshooting, and collaboration with cross-functional teams to meet organizational goals. Key responsibilities: Design, implement, and maintain LAN, WAN, VPN, and Wi-Fi networks. Configure and manage devices such as routers, switches, firewalls and access points. Monitor network performance and troubleshoot connectivity issues. Ensure network security by implementing firewalls, intrusion detection/prevention systems and encryption protocols. Collaborate with ISPs and vendors to ensure reliable Internet connectivity. Security and compliance: Implement and enforce security policies, procedures and best practices for servers and the network. Conduct regular vulnerability assessments and penetration testing. Ensure compliance with industry standards and regulations (e.g. GDPR, ISO 27001). Required skills and qualifications: Strong knowledge of server operating systems like Windows and Linux. Proficiency in networking protocols (TCP/IP, DNS, DHCP); experience with virtualization technologies like VMware and Hyper-V. Familiarity with cloud platforms like AWS, Azure and Google Cloud. Problem-Solving: Ability to diagnose and resolve complex network issues; analytical mindset with a focus on root cause analysis. Communication: Excellent verbal and written communication skills; ability to explain technical concepts to non-technical stakeholders. Certifications: Cisco CCNA/CCNP, CompTIA Network+, CompTIA Server+. Project management: Plan and execute server and network infrastructure upgrades, migrations and expansions. Collaborate with stakeholders to assess requirements and deliver solutions; manage budgets and timelines for infrastructure projects. Experience: 5-6 years of experience in network administration. Hands-on experience with enterprise-grade infrastructure. Work environment: On-site work model depending on organizational requirements. May require after-hours support during emergencies or scheduled maintenance. Collaboration with IT teams, vendors and external service providers. Key Performance Indicators (KPIs): Network latency and reliability. Incident response and resolution times. Compliance with security standards.
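For the network latency and reliability KPI above, a lightweight check is to measure TCP connect times to key internal endpoints. A minimal sketch; the target hosts are hypothetical and this is only a coarse reachability probe, not a monitoring platform.

```python
import socket
import statistics
import time

def tcp_connect_latency(host: str, port: int, samples: int = 5) -> float:
    """Median TCP connect time in milliseconds - a rough reachability/latency check."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=3):
            pass                                  # connection established, then closed
        times.append((time.perf_counter() - start) * 1000)
    return statistics.median(times)

if __name__ == "__main__":
    # hypothetical internal targets; replace with the hosts you actually monitor
    for host, port in [("intranet.example.local", 443), ("dns.example.local", 53)]:
        try:
            print(f"{host}:{port}  {tcp_connect_latency(host, port):.1f} ms")
        except OSError as exc:
            print(f"{host}:{port}  unreachable ({exc})")
```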
Interested candidates can send their resume to mgs.rec@mizuho-cb.com along with the below details. Available for F2F? (Y/N) Notice period? Total & relevant experience? Current & expected CTC? Current residential location in Mumbai? Address: Mizuho Global Services India Pvt. Ltd, 11th Floor, Q2 Building Aurum Q Park, Gen 4/1, Ttc, Thane Belapur Road, MIDC Industrial Area, Ghansoli, Navi Mumbai- 400710.

Posted 2 weeks ago

Apply

12.0 years

0 Lacs

Bengaluru, Karnataka

Remote

Ways of working: Mandate 1 – Employees will come to the office twice or thrice a week at their base location and work remotely for the remaining days. _____________________________________________________________________________________________________________________________________________ Job Profile: Principal Software Engineer - StoreFront team Location: Bangalore | Karnataka Years of Experience: 9 - 12 ABOUT THE TEAM & ROLE: Swiggy's StoreFront Engineering team helps customers enjoy personalized discovery and purchase experiences across multiple product lines (Stores, Food, Genie, and Instamart). The team is enabling this by developing thoughtfully crafted applications, smart cataloging, relevance-based search & intent-driven merchandising, checkout management solutions, and payment systems. We are looking for engineers who have hands-on experience in building highly reliable distributed systems and have deep expertise in database design & performance tuning. Knowledge of Machine Learning and other Predictive Modeling techniques will be an added strength. Principal Software Engineers in Swiggy not only contribute to the high-level architecture of several systems but also contribute to the overall success of the product by driving technology and best practices in engineering in their respective teams. They establish technology vision for respective teams and demonstrate how to solve a deeply complex and hard technical challenge, and help communicate that vision upward (CTO), inward (peers and engineering team), and outward (product & business teams). As a Senior Technical Individual Contributor, you will be responsible for Building and operating multi-tenant ordering and payment platforms that work across all business lines and subsidiaries. This needs to scale to millions of transactions per day with 99.95% uptime while allowing new business lines to be deployed safely and rapidly. Building and scaling Swiggy Money (our fast-growing wallet) across all business lines and subsidiaries. You will be owning Tier-1 services which serve millions of users on a daily basis with a throughput of 100k rps and higher. What qualities are we looking for? Technically hands-on, prior experience with scalable architecture Possess 12+ years of software engineering and product delivery experience Excellent command over data structures and algorithms Exceptional coding skills in either Java or Go Strong problem-solving and analytical skills Good knowledge of distributed technologies, real-time systems of high throughput, low latency, and highly scalable systems. Experience with high-performance product lines catering to millions of daily traffic is a plus. What will you get to do here? Define and implement a long-term technology vision for your team. Serve as a technical lead on our most demanding, cross-functional projects across the Storefront organization. Come up with best practices to help the team achieve their technical tasks and continually thrive in improving the technology of the product/team. Decide technology & tool choices for your team. Experiment with new & relevant technologies and tools, and drive adoption while measuring yourself on the impact you can create. Ensure the quality of architecture and design of systems. Responsible for end-to-end architecture, high-level design/low-level design of various systems and applications that you are assigned to. 
You will also be responsible for writing code and deploying these enterprise applications to production, including operational excellence. Visit our tech blogs to learn more about some of the challenges we deal with: https://bytes.swiggy.com/the-swiggy-delivery-challenge-part-one-6a2abb4f82f6 https://bytes.swiggy.com/swiggy-distance-service-9868dcf613f4 https://bytes.swiggy.com/the-tech-that-brings-you-your-food-1a7926229886 We are an equal-opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, disability status, or any other characteristic protected by the law.
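To put the scale figures quoted above in perspective, a quick back-of-envelope calculation shows what 99.95% uptime and 100k requests per second imply per year and per day. The numbers come straight from the posting; the arithmetic is only an illustration.

```python
SECONDS_PER_DAY = 24 * 60 * 60

uptime_target = 0.9995                      # 99.95% availability
throughput_rps = 100_000                    # Tier-1 service throughput mentioned above

downtime_per_year_min = (1 - uptime_target) * 365 * 24 * 60
requests_per_day = throughput_rps * SECONDS_PER_DAY

print(f"allowed downtime per year : {downtime_per_year_min:.0f} minutes "
      f"(~{downtime_per_year_min / 60:.1f} hours)")
print(f"requests per day at 100k rps: {requests_per_day:,}")
# -> roughly 263 minutes (~4.4 hours) of downtime and 8.64 billion requests/day
```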

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Department : Technology / AI Innovation Reports To : AI/ML Lead or Head of Data Science Location : Pune Role Summary We are looking for an experienced AI/ML & Generative AI Developer to join our growing AI innovation team. You will play a critical role in building advanced machine learning models, Generative AI applications, and LLM-powered solutions. This role demands deep technical expertise, creative problem-solving, and a strong understanding of AI workflows and scalable cloud-based deployments. Key Responsibilities Design, develop, and deploy AI/ML models and Generative AI applications for diverse enterprise use cases. Implement, fine-tune, and integrate Large Language Models (LLMs) using frameworks like LangChain, LlamaIndex, and RAG pipelines. Build Agentic AI systems with multi-step reasoning and autonomous decision-making capabilities. Create secure and scalable data ingestion pipelines for structured and unstructured data, enabling indexing, vector search, and advanced retrieval. Collaborate with cross-functional teams (Data Engineers, Product Managers, Architects) to operationalize AI solutions. Build CI/CD pipelines for ML/GenAI workflows and support end-to-end MLOps practices. Leverage Azure and Databricks for training, serving, and monitoring AI models at scale. Required Qualifications & Skills (Mandatory) 4+ years of hands-on experience in AI/ML development, including Generative AI applications. Expertise in RAG, LLMs, and Agentic AI implementations. Strong knowledge of LangChain, LlamaIndex, or similar LLM orchestration frameworks. Proficient in Python and key ML/DL libraries : TensorFlow, PyTorch, Scikit-learn. Solid foundation in Deep Learning, Natural Language Processing (NLP), and Transformer-based architectures. Experience in building data ingestion, indexing, and retrieval pipelines for real-world enterprise scenarios. Hands-on experience with Azure cloud services and Databricks. Proven experience designing CI/CD pipelines and working with MLOps tools like MLflow, DVC, or Kubeflow. Soft Skills Strong problem-solving and critical thinking ability. Excellent communication skills, with the ability to explain complex AI concepts to non-technical stakeholders. Strong collaboration and teamwork in agile, cross-functional environments. Growth mindset with curiosity to explore and learn emerging technologies. Preferred Qualifications Familiarity with vector databases : FAISS, Pinecone, Weaviate. Experience with AutoGPT, CrewAI, or similar agent frameworks. Exposure to Azure OpenAI, Cognitive Search, or Databricks ML tools. Understanding of AI security, responsible AI, and model governance. Key Relationships Internal : Data Scientists, Data Engineers, DevOps Engineers, Product Managers, Solution Architects. External : AI/ML platform vendors, cloud service providers (Microsoft Azure), third-party data providers. Role Dimensions Contribute to AI strategy, architecture, and reusable AI components. Support multiple projects simultaneously in a fast-paced agile environment. Mentor junior engineers and contribute to best practices and standards. Success Measures (KPIs) % reduction in model development time using reusable pipelines. Successful deployment of GenAI/LLM features in production. Accuracy, latency, and relevance improvements in AI search and retrieval. Uptime and scalability of deployed AI models. Integration of responsible AI and compliance practices. 
Competency Framework Alignment Technical Excellence in AI/ML/GenAI Cloud Engineering & DevOps Enablement Innovation & Continuous Improvement Business Value Orientation Agile Execution & Ownership Cross-functional Collaboration (ref:hirist.tech)
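For readers unfamiliar with the RAG pipelines this role mentions, here is a minimal, framework-free sketch of the retrieval step: embed documents, rank them by cosine similarity against a query, and assemble a prompt for an LLM. The embed() stub and the sample documents are hypothetical placeholders; a production pipeline would use a real embedding model, an orchestration framework such as LangChain or LlamaIndex, and a vector database.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# embed() is a hypothetical stand-in for a real embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: hash words into a fixed-size vector."""
    vec = np.zeros(64)
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

documents = [
    "Invoices are processed within 3 business days.",
    "Refund requests must include the original order ID.",
    "Support is available 24x7 via the enterprise portal.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    scores = doc_vectors @ embed(query)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

query = "How long does invoice processing take?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # Would be sent to an LLM in a real pipeline.
```

The same structure carries over to real systems; only the embedding model, the vector store, and the LLM call change.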

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

As a Senior Software Engineer, you will play a critical role in designing, developing, and maintaining the firmware for access points. Working closely with the Software Tech Lead, you will help build high-performance, secure and scalable networking solutions that enable enterprise-grade connectivity. This role requires strong expertise in embedded Linux, networking protocols, and system design. You will focus on developing control-plane and data-plane software, ensuring efficient packet processing, system performance, and seamless hardware integration. A strong understanding of packet flow in Linux and OpenWRT systems is essential. This is an opportunity to work at the intersection of networking, cloud, and embedded systems, making a direct impact on enterprise-grade WLAN infrastructure. Responsibilities Implement control-plane and data-plane features, ensuring efficient packet processing and network performance. Work closely with the Software Tech Lead to define system architecture, design patterns and implementation strategies. Optimize firmware to ensure low-latency, high-throughput, and reliable operation across various networking environments. Collaborate with the internal hardware and software teams to integrate various services for embedded networking systems. Develop and maintain secure, efficient, and well-documented APIs for internal and external use. Enhance logging, monitoring, and debugging capabilities to improve system observability and troubleshooting. Participate in code reviews, technical discussions, and continuous improvement initiatives to maintain software quality. Ensure compliance with networking security standards and best practices. Contribute to automated testing, CI/CD pipelines, and system validation efforts to ensure firmware stability. Key Qualifications 5+ years of software engineering experience in Wi-Fi access point development Experience with Wi-Fi, 802.11, WLAN and BLE protocols and chipsets Experience with Layer 2 and Layer 3 protocols at depth. Strong programming skills in C, shell scripting (Go, Rust or Zig experience is a plus). Experience working with OpenWRT and other open-source networking firmware is highly desirable. Deep understanding of Linux networking subsystems, system programming, and kernel-level development. Must have expertise in packet flow in Linux and OpenWRT systems, with a strong grasp of both control path and data path optimizations. Experience with debugging, fixing and optimizing enterprise Wi-Fi performance is a must. Ability to troubleshoot complex networking and backend issues. Strong communication skills, with the ability to collaborate in cross-functional teams. Self-motivated with a strong sense of ownership and responsibility. Education BE or ME in EE, E&C, Computer Science. Company Statement/Values At NETGEAR, we are on a mission to unleash the full potential of connectivity with intelligent solutions that delight and protect. We turn ideas into innovative networking products that connect people, power businesses, and advance the way we live. We're a performance-driven, talented and connected team that's committed to delivering world-class products for our customers. As a company, we value our employees as the most essential building blocks of our success. And as teammates, we commit to taking our work to the Next Gear by living our values: we Dare to Transform the future, Connect and Delight our customers, Communicate Courageously with each other and collaborate to Win It Together.
You’ll find our values woven through our processes, present in our decisions, and celebrated throughout our culture. We strive to attract top talent and create a great workplace where people feel engaged, inspired, challenged, proud and respected. If you are creative, forward-thinking, passionate about technology and are looking for a rewarding career to make an impact, then you've got what it takes to succeed at NETGEAR. Join our network and help us shape the future of connectivity. NETGEAR hires based on merit. All qualified applicants will receive equal consideration for employment. All your information will be kept confidential according to EEO guidelines.
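Since the posting above stresses understanding packet flow, here is an illustrative, language-neutral sketch (not specific to NETGEAR or OpenWRT) that builds a fake Ethernet + IPv4 header and then parses it back with struct; real data-plane work would do the equivalent in C inside the Linux kernel or OpenWRT userspace.

```python
# Illustrative only: build a fake Ethernet + IPv4 header, then parse it back.
# Real firmware would do this in C in the kernel or OpenWRT data path.
import struct
import socket

# Ethernet header: dst MAC (6), src MAC (6), EtherType (2) -> 14 bytes.
eth = struct.pack("!6s6sH", b"\xaa" * 6, b"\xbb" * 6, 0x0800)  # 0x0800 = IPv4

# Minimal IPv4 header (20 bytes, no options); field values are illustrative.
ip = struct.pack(
    "!BBHHHBBH4s4s",
    0x45, 0, 40, 0x1234, 0, 64, 6, 0,        # ver/IHL, TOS, len, id, flags, TTL, proto=TCP, checksum
    socket.inet_aton("192.0.2.1"),            # source address (documentation range)
    socket.inet_aton("192.0.2.2"),            # destination address
)
frame = eth + ip

# Parse it back, as a capture or forwarding path would.
dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
ver_ihl, _, total_len, _, _, ttl, proto, _, saddr, daddr = struct.unpack(
    "!BBHHHBBH4s4s", frame[14:34]
)
print(f"EtherType=0x{ethertype:04x} TTL={ttl} proto={proto} "
      f"src={socket.inet_ntoa(saddr)} dst={socket.inet_ntoa(daddr)}")
```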

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Overview We are seeking a highly experienced and innovative Senior AI Engineer with a strong background in Generative AI, including LLM fine-tuning and prompt engineering. This role requires hands-on expertise across NLP, Computer Vision, and AI agent-based systems, with the ability to build, deploy, and optimize scalable AI solutions using modern tools and frameworks. Skills & Qualifications: Bachelor's or Master's in Computer Science, AI, Machine Learning, or related field. 4+ years of hands-on experience in AI/ML solution development. Proven expertise in fine-tuning LLMs (e.g., LLaMA, Mistral, Falcon, GPT-family) using techniques like LoRA, QLoRA, PEFT. Deep experience in prompt engineering, including zero-shot, few-shot, and retrieval-augmented generation (RAG). Proficient in key AI libraries and frameworks: LLMs & GenAI: Hugging Face Transformers, LangChain, LlamaIndex, OpenAI API, Diffusers. NLP: SpaCy, NLTK. Vision: OpenCV, MMDetection, YOLOv5/v8, Detectron2. MLOps: MLflow, FastAPI, Docker, Git. Familiarity with vector databases (Pinecone, FAISS, Weaviate) and embedding generation. Experience with cloud platforms like AWS, GCP, or Azure, and deployment on in-house GPU-backed infrastructure. Strong communication skills and ability to convert business problems into technical solutions. Preferred Qualifications: Experience building multimodal systems (text + image, etc.) Practical experience with agent frameworks for autonomous or goal-directed AI. Familiarity with quantization, distillation, or knowledge transfer for efficient model deployment. Responsibilities: Design, fine-tune, and deploy generative AI models (LLMs, diffusion models, etc.) for real-world applications. Develop and maintain prompt engineering workflows, including prompt chaining, optimization, and evaluation for consistent output quality. Build NLP solutions for Q&A, summarization, information extraction, text classification, and more. Develop and integrate Computer Vision models for image processing, object detection, OCR, and multimodal tasks. Architect and implement AI agents using frameworks such as LangChain, AutoGen, CrewAI, or custom pipelines. Collaborate with cross-functional teams to gather requirements and deliver tailored AI-driven features. Optimize models for performance, cost-efficiency, and low latency in production. Continuously evaluate new AI research, tools, and frameworks and apply them where relevant. Mentor junior AI engineers and contribute to internal AI best practices and documentation. (ref:hirist.tech)
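As a rough illustration of the LoRA/PEFT fine-tuning this role asks for, here is a minimal sketch using the Hugging Face transformers and peft libraries. The model id is a placeholder and the target modules are model-dependent assumptions; the snippet only wraps a model with adapters and does not run a full training loop.

```python
# Sketch: wrap a causal LM with LoRA adapters via PEFT.
# "tiny-llm-placeholder" is a hypothetical model id; substitute a real checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "tiny-llm-placeholder"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

lora_cfg = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections (model-dependent)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the adapter weights remain trainable
# A Trainer / supervised fine-tuning loop over instruction data would follow here.
```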

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Description The primary responsibility of the role is to perform campaign operations to improve visibility of the content in Amazon Prime Video. The role will require the candidate to quickly understand the campaign ops tools and operation workflow tools. The associate needs to continuously adapt, learn new features of the program, and improve their acumen to quickly edit and fix up content. The associate has to follow the editing SOP to spot/catch errors in the content. The associate needs to perform content quality checks to qualify the user experience for content viewing (flow and format quality). The associate will need to use software tools for quality audits, content editing and data capture. The associate will need to be aware of the operations metrics like productivity (number of titles processed per hour), quality (defect percentage) and delivery/latency SLA. The associate will be measured on compliance with these metrics, SLA requirements, QA guidelines, and team and personal goals. The associate should be a team player, bring improvement ideas to their manager, and help improve the editing/QA process. The associate will often need to contact stakeholders globally to provide status reports, communicate relevant information and escalate when needed. The role is an individual contributor role. The role requires a graduate degree with exposure to MS Office and comfort with numbers. In addition, the associate should have attention to detail, good communication skills, and a professional demeanor. The role requires the associate to be comfortable with night shift hours and flexible to extend support during critical business requirements. Basic Qualifications Completed undergraduate degree (UG) in any stream Analytical knowledge to solve basic mathematical and logical problems Candidate should be familiar with Excel functions. Ability to communicate effectively Strong attention to detail in editing content and the ability to deep dive and identify root causes of issues Good at problem solving, data analysis and troubleshooting issues related to content editing Preferred Qualifications Ability to meet deadlines in a fast-paced work environment driven by complex software systems and processes Self-starter, good team player Good interpersonal skills to manage ongoing relationships with the program team and inter-operations teams Working knowledge of XML standards would be an added advantage Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - Tamil Nadu Job ID: A3039315
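To make the operational metrics above concrete, a small illustrative calculation of productivity, defect rate, and SLA compliance; the numbers are invented for the example.

```python
# Example numbers only: how the productivity, quality and SLA metrics combine.
titles_processed = 96
hours_worked = 8
defective_titles = 3
titles_within_sla = 92

productivity = titles_processed / hours_worked          # titles per hour
defect_pct = 100 * defective_titles / titles_processed  # quality (defect %)
sla_pct = 100 * titles_within_sla / titles_processed    # delivery/latency SLA

print(f"Productivity: {productivity:.1f} titles/hour")
print(f"Defect rate:  {defect_pct:.1f}%")
print(f"SLA met:      {sla_pct:.1f}%")
```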

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Position: Quantitative Engineer / Analyst Location: Hyderabad, India Type: Full-Time (Onsite | Immediate joiners Required | F2F interview mandatory) Role Overview We are seeking a skilled and hands-on Quantitative Engineer / Analyst to join our team in Hyderabad. This role involves complete ownership of the strategy development lifecycle—from ideation and data engineering to backtesting, deployment, and live trading. You will work closely with a small, agile team to design and implement systematic trading strategies and infrastructure. Key Responsibilities Strategy Development & Research Develop and validate alpha-generating ideas (momentum, mean-reversion, statistical arbitrage, alternative data). Conduct large-scale backtesting; analyze PnL, turnover, risk metrics, and capital constraints. Software Engineering Write high-quality production code using Python, C++, or Java. Develop and maintain data pipelines for ingesting and cleaning market and alternative data. Build robust APIs and shared libraries for research and production use. Infrastructure & Deployment Containerize applications using Docker and automate deployments with CI/CD pipelines (GitLab CI, Jenkins). Collaborate on infrastructure design using Kubernetes and cloud platforms to ensure low-latency, high-availability systems. Live Trading & Monitoring Integrate trading strategies with broker APIs (e.g., IBKR, FIX) or internal gateways. Configure execution schedules, risk parameters, and real-time monitoring dashboards. Perform post-trade analysis and iteratively refine strategies. Collaboration & Documentation Work with portfolio managers, risk teams, and other quants to align strategy and risk objectives. Document code, models, and infrastructure; contribute to code reviews and mentor junior team members. Required Qualifications Bachelor’s or Master’s degree in a quantitative field (Mathematics, Statistics, Computer Science, Engineering, Physics, or Finance). Minimum 2 years of experience in production-grade software development using Python and/or C++. Experience in quantitative strategy design and validation. Solid foundation in statistical modeling, time-series analysis, and machine learning techniques. Familiarity with backtesting frameworks (e.g., Zipline, Backtrader, or custom engines). Strong skills in data handling and processing: SQL databases (PostgreSQL, TimescaleDB), NoSQL, and Pandas. Proficiency with Docker, Kubernetes, CI/CD pipelines, and cloud platforms (AWS, GCP, or Azure). Preferred Qualifications Experience with low-latency execution systems or FPGA-based infrastructure. Exposure to alternative data (e.g., social sentiment, news, satellite imagery). Knowledge of options pricing, risk analytics, or portfolio optimization. Experience working in live trading environments. Familiarity with FIX protocol and market data handlers. What We Offer Full ownership of strategy development and deployment. Fast-paced, collaborative environment with rapid feedback cycles. Access to top-tier research tools, data sources, and computing infrastructure. Competitive compensation with performance-linked bonuses.
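As a toy illustration of the backtesting work described above, here is a minimal vectorized moving-average crossover backtest on synthetic prices. Real research would use cleaned market data, transaction costs, and a proper engine such as Backtrader or an internal framework; the parameters here are arbitrary.

```python
# Toy backtest: moving-average crossover on synthetic prices (no costs, no slippage).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))))

fast = prices.rolling(10).mean()
slow = prices.rolling(50).mean()
signal = (fast > slow).astype(int)          # 1 = long, 0 = flat

returns = prices.pct_change()
strat_returns = signal.shift(1) * returns   # trade on the next bar to avoid lookahead

pnl = (1 + strat_returns.fillna(0)).cumprod() - 1
sharpe = np.sqrt(252) * strat_returns.mean() / strat_returns.std()
print(f"Total return: {pnl.iloc[-1]:.1%}  Sharpe (naive, daily): {sharpe:.2f}")
```

Shifting the signal by one bar before applying returns is the key detail: it prevents lookahead bias, one of the first things checked when validating a strategy.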

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description About Citi Citi, the leading global bank, has approximately 200 million customer accounts and does business in more than 160 countries and jurisdictions. Citi provides consumers, corporations, governments and institutions with a broad range of financial products and services, including consumer banking and credit, corporate and investment banking, securities brokerage, transaction services, and wealth management. Our core activities are safeguarding assets, lending money, making payments and accessing the capital markets on behalf of our clients. Diversity is a key business imperative and a source of strength at Citi. Being the best for our clients requires a culture of inclusion; an environment of equity, respect, and opportunity for everyone. Teams with varied backgrounds and experiences bring different perspectives to the conversation, enhance decision-making, and improve overall business performance. Citi has made it a priority to foster a culture where the best people want to work, where individuals are promoted based on merit, where we value and demand respect for others and where opportunities to develop are widely available to all. Fixed Income ETrading Tech Overview The evolution of electronic trading and automation has changed the way that rates products trade forever, driving a need for real-time, low-latency pricing, market making and risk technology. In this increasingly electronic and competitive landscape, Citi is a key player due to its leading eTrading platform and investment in technology. The FI eTrading team is at the forefront, building high-performance, low-latency technology that supports the execution of billions of dollars of client trades every day. Our competitive advantage is our technology and a platform that provides an exceptional and dependable trading experience. If you have this kind of vision, the ability to see ahead and develop a clear path forward in a quest to try the as-yet untried, here is the opportunity. Job Purpose: We are looking for a talented and passionate individual to join our Java Server development team and continue to evolve our next-generation trading application. The successful candidate will gain valuable exposure to the Electronic Trading business and an opportunity to work on a large-scale, modern technology platform with a global presence. The team works closely with end users, gaining direct exposure to the fast-paced world of front office trading and finance.
Responsibilities: Understanding of good design principles and the ability to adhere to complex designs. Development of common, reusable components and services utilizing Citi’s best practices. Responsible for creating high-performance, low-latency applications leveraging the existing Citi framework. Ensuring strong reliability, scalability and performance of our components. Apply an engineering mind-set to development work: understand use-cases in detail, develop metrics to build good estimates of volume and compute velocity requirements, and understand and discuss openly any implementation limitations or workarounds. Contribute actively to system design decisions. Evaluate and build POCs for new strategic initiatives and work to convert them to industrial-level solutions. Provide post-release assistance to business, development and support groups. Develop applications as per best practice and remain compliant with prescribed best practices (TDD, maintain high unit test coverage, CI…). Assist in third-line support during core trading hours. Qualifications: Required: 8+ years of strong hands-on development experience using Java, including expertise with Spring or another dependency injection framework. 5+ years’ experience in developing and maintaining highly scalable, real-time, low-latency, high-volume microservices. Experience with real-time messaging middleware (Kafka, RabbitMQ, Solace, Tibco, …). Experience working with multi-threaded applications. Strong software development fundamentals, data structures, design patterns, object-oriented programming, architecture, algorithms, and problem-solving skills. Application deployment and debugging of applications on UNIX/Linux. Nice to Have: Understanding of capital markets and financial derivatives (rates or other). Experience with system performance tuning and low-latency Java programming. Hands-on experience in database technologies, including RDBMS (Oracle, …) and NoSQL (MongoDB). Experience with in-memory datastore/cache libraries (Redis, Apache Ignite, GemFire, …). Experience with CI/CD pipelines. Test-driven development, including unit and end-to-end testing. Competencies: Strong verbal and written communication skills; ability to interface with business users. Self-motivated, with determination to achieve goals. Willingness to learn, both technically and professionally. Strong analytical and problem-solving skills. Good team-working skills and the ability to work in a distributed, global team environment. Ability to work in a fast-paced environment; flexible and able to deliver quality results in the required timeframe. Education: Bachelor’s degree/University degree or equivalent experience. This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required. ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Applications Development ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
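The role above targets Java with Spring and messaging middleware; purely as a language-neutral sketch of the multithreaded consume-and-process pattern such services use, here is a small Python producer/consumer example. queue.Queue stands in for real middleware (Kafka, Solace, etc.) and the message handler is a placeholder, not Citi's implementation.

```python
# Sketch of a multithreaded consumer pipeline; queue.Queue stands in for
# real messaging middleware (Kafka, Solace, ...). Handlers are placeholders.
import queue
import threading

ticks: queue.Queue = queue.Queue(maxsize=1000)
STOP = object()  # sentinel used to shut workers down

def producer() -> None:
    for i in range(10):
        ticks.put({"symbol": "UST10Y", "price": 100 + i * 0.01})
    ticks.put(STOP)

def consumer() -> None:
    while True:
        msg = ticks.get()
        if msg is STOP:
            ticks.put(STOP)      # re-queue the sentinel so other workers also exit
            break
        # Placeholder for pricing / risk logic on each message.
        print(f"priced {msg['symbol']} @ {msg['price']:.2f}")

threads = [threading.Thread(target=consumer) for _ in range(2)]
for t in threads:
    t.start()
producer()
for t in threads:
    t.join()
```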

Posted 2 weeks ago

Apply

25.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Tower Research Capital is a leading quantitative trading firm founded in 1998. Tower has built its business on a high-performance platform and independent trading teams. We have a 25+ year track record of innovation and a reputation for discovering unique market opportunities. Tower is home to some of the world’s best systematic trading and engineering talent. We empower portfolio managers to build their teams and strategies independently while providing the economies of scale that come from a large, global organization. Engineers thrive at Tower while developing electronic trading infrastructure at a world class level. Our engineers solve challenging problems in the realms of low-latency programming, FPGA technology, hardware acceleration and machine learning. Our ongoing investment in top engineering talent and technology ensures our platform remains unmatched in terms of functionality, scalability and performance. At Tower, every employee plays a role in our success. Our Business Support teams are essential to building and maintaining the platform that powers everything we do — combining market access, data, compute, and research infrastructure with risk management, compliance, and a full suite of business services. Our Business Support teams enable our trading and engineering teams to perform at their best. At Tower, employees will find a stimulating, results-oriented environment where highly intelligent and motivated colleagues inspire each other to reach their greatest potential. As part of the Global Cybersecurity team, individual(s) will work to continually improve the security posture and service by monitoring, identifying and correcting security gaps and implementing countermeasures. Location: Gurgaon, India Team: Global Security Operations Shift Timing: 6:00 AM IST – 3:00 PM IST with rotational weekend support as part of 24x7 operations Responsibilities Monitoring alerts for potential security incidents and requests for information. This includes, but is not limited to, monitoring of real-time channels, tools, dashboards, periodic reports, chat sessions, and tickets. Following incident-specific procedures to perform basic triage of potential security incidents to determine their nature and priority, eliminate obvious false positives, and process requests for information. Investigate and validate alerts to determine scope, impact, and root cause using available telemetry and threat intelligence. Escalate confirmed incidents with comprehensive evidence, impact assessment, and recommended containment/remediation actions. Coordinating with stakeholders and supporting third-party security service providers to triage alerts, events or incidents. Monitoring and analyzing Security Information and Event Management (SIEM) alerts to identify security issues for remediation. Write detection content, correlation rules, and queries in SIEM platforms to improve threat detection capabilities. Contribute to incident response playbooks, runbooks, and process improvements. Participate in threat hunting activities, adversary emulation exercises, and purple teaming efforts. Maintain accurate and detailed documentation of investigations, incidents, and actions in ticketing systems. Stay informed of current threat landscape, attacker tactics (MITRE ATT&CK), and vulnerabilities relevant to Tower’s environment. Interfacing with a variety of customers/users in a polite, positive, and professional manner.
Requirements Bachelor’s Degree in Computer Science / Information Security / Information Technology 3+ years of hands-on experience in a Security Operations Center (SOC) or threat detection/incident response role in a mid to large-scale organization. Proven track record and experience of the following in a highly complex and global organization: performing triage of potential security incidents; experience with technologies including, but not limited to, SIEM, EDR/NDR/XDR, web proxies, vulnerability assessment tools, IDS/IPS, network/host-based firewalls, and data leakage prevention (DLP). Solid understanding of: Linux, Windows and macOS; TCP/IP, DNS, HTTP/HTTPS, and other common network protocols; malware behavior and attacker techniques (MITRE ATT&CK); common attack vectors including phishing, malware, lateral movement, and data exfiltration. Willingness to work an early shift to provide round-the-clock support, along with alternating weekend shifts. Soft Skills & Work Traits Strong analytical, investigative, and troubleshooting skills. Effective written and verbal communication skills; able to translate complex security issues into actionable guidance. Organized, detail-oriented, and capable of managing multiple priorities under pressure. Passionate about security, continuous learning, and operational excellence. Comfortable working in a rotating shift model including weekend support as needed. A strong desire to understand the what / why / how of security incidents. Benefits: Tower’s headquarters are in the historic Equitable Building, right in the heart of NYC’s Financial District and our impact is global, with over a dozen offices around the world. At Tower, we believe work should be both challenging and enjoyable. That is why we foster a culture where smart, driven people thrive – without the egos. Our open concept workplace, casual dress code, and well-stocked kitchens reflect the value we place on a friendly, collaborative environment where everyone is respected, and great ideas win. Our benefits include: Generous paid time off policies Savings plans and other financial wellness tools available in each region Hybrid working opportunities Free breakfast, lunch and snacks daily In-office wellness experiences and reimbursement for select wellness expenses (e.g., gym, personal training and more) Volunteer opportunities and charitable giving Social events, happy hours, treats and celebrations throughout the year Workshops and continuous learning opportunities At Tower, you’ll find a collaborative and welcoming culture, a diverse team and a workplace that values both performance and enjoyment. No unnecessary hierarchy. No ego. Just great people doing great work – together. Tower Research Capital is an equal opportunity employer.
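As a simplified illustration of the detection-engineering work mentioned above (writing detection content and correlation rules), here is a small Python sketch that flags possible brute-force activity by counting failed logins per source IP inside a sliding window. In practice this logic would live in the SIEM as a correlation rule or query; the events, threshold, and window here are invented.

```python
# Simplified detection logic: flag IPs with many failed logins in a short window.
from collections import defaultdict
from datetime import datetime, timedelta

events = [  # invented sample telemetry
    {"ts": datetime(2024, 1, 1, 9, 0, s), "src_ip": "203.0.113.5", "outcome": "failure"}
    for s in range(0, 50, 5)
] + [{"ts": datetime(2024, 1, 1, 9, 1, 0), "src_ip": "198.51.100.7", "outcome": "failure"}]

WINDOW = timedelta(minutes=5)
THRESHOLD = 8

def brute_force_candidates(events):
    failures = defaultdict(list)
    for e in sorted(events, key=lambda e: e["ts"]):
        if e["outcome"] != "failure":
            continue
        failures[e["src_ip"]].append(e["ts"])
        # keep only failures inside the sliding window
        failures[e["src_ip"]] = [t for t in failures[e["src_ip"]] if e["ts"] - t <= WINDOW]
        if len(failures[e["src_ip"]]) >= THRESHOLD:
            yield e["src_ip"], e["ts"]

for ip, ts in brute_force_candidates(events):
    print(f"ALERT: possible brute force from {ip} at {ts}")
```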

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Role - MLOps Location - Remote Experience - 5+ Yrs Responsibilities: ● Build and optimize model serving infrastructure with a focus on inference latency and cost optimization ● Architect efficient inference pipelines that balance latency, throughput, and cost across various acceleration options ● Develop monitoring and observability solutions for ML systems ● Collaborate with ML Engineers to establish best practices for optimized model deployment ● Implement cost-efficient, enterprise-scale solutions ● Collaborate in a cross-functional, distributed team for continuous system improvement ● Work with MLEs, QA Engineers, and DevOps Engineers ● Evaluate and implement new technologies and tools ● Contribute to architectural decisions for distributed ML systems Experience and Qualifications: ● 5+ years of experience in software engineering with Python ● Experience with ML frameworks, particularly PyTorch ● Experience optimizing ML models with hardware acceleration (AWS Neuron, ONNX, TensorRT) ● Experience with AWS ML services and hardware-accelerated instances (SageMaker, Inferentia, Trainium) ● Proven experience building and operating AWS serverless architectures ● Deep understanding of event-driven processing patterns, SQS/SNS and serverless caching solutions ● Experience with containerization using Docker and orchestration tools ● Strong knowledge of RESTful API design and implementation ● Proficiency in writing good-quality, secure code and familiarity with static code analysis tools ● Excellent analytical, conceptual and communication skills in spoken and written English ● Experience applying Computer Science fundamentals in algorithm design, problem solving, and complexity analysis
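A minimal sketch of the kind of inference-latency measurement this role cares about: wrap any predict callable, collect per-call latencies, and report tail percentiles. The dummy predict() below is a placeholder for a real ONNX/TensorRT/SageMaker-served model; warmup and run counts are arbitrary.

```python
# Measure per-request inference latency and report tail percentiles.
# predict() is a placeholder for a real ONNX / TensorRT / SageMaker endpoint call.
import time
import numpy as np

def predict(x: np.ndarray) -> np.ndarray:
    return x @ x.T  # dummy workload standing in for model inference

def latency_profile(fn, payload, warmup: int = 10, runs: int = 200) -> dict:
    for _ in range(warmup):        # warm caches / lazy initialisation before timing
        fn(payload)
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn(payload)
        samples.append((time.perf_counter() - t0) * 1000)  # milliseconds
    return {p: float(np.percentile(samples, p)) for p in (50, 95, 99)}

profile = latency_profile(predict, np.random.rand(256, 256))
print({f"p{p}": f"{ms:.2f} ms" for p, ms in profile.items()})
```

Tracking p95/p99 rather than the mean is what surfaces the tail-latency regressions that matter for serving SLAs.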

Posted 2 weeks ago

Apply

3.0 years

10 - 12 Lacs

India

On-site

About The Role We are looking for a highly skilled Data Engineer with a strong foundation in Power BI, SQL, Python , and Big Data ecosystems to help design, build, and optimize end-to-end data solutions. The ideal candidate is passionate about solving complex data problems, transforming raw data into actionable insights, and contributing to data-driven decision-making across the organization. Key Responsibilities Data Modelling & Visualization Build scalable and high-quality data models in Power BI using best practices. Define relationships, hierarchies, and measures to support effective storytelling. Ensure dashboards meet standards in accuracy, visualization principles, and timelines. Data Transformation & ETL Perform advanced data transformation using Power Query (M Language) beyond UI-based steps. Design and optimize ETL pipelines using SQL, Python, and Big Data tools. Manage and process large-scale datasets from various sources and formats. Business Problem Translation Collaborate with cross-functional teams to translate complex business problems into scalable, data-centric solutions. Decompose business questions into testable hypotheses and identify relevant datasets for validation. Performance & Troubleshooting Continuously optimize performance of dashboards and pipelines for latency, reliability, and scalability. Troubleshoot and resolve issues related to data access, quality, security, and latency, adhering to SLAs. Analytical Storytelling Apply analytical thinking to design insightful dashboards—prioritizing clarity and usability over aesthetics. Develop data narratives that drive business impact. Solution Design Deliver wireframes, POCs, and final solutions aligned with business requirements and technical feasibility. Required Skills & Experience Minimum 3+ years of experience as a Data Engineer or in a similar data-focused role. Strong expertise in Power BI: data modeling, DAX, Power Query (M Language), and visualization best practices. Hands-on with Python and SQL for data analysis, automation, and backend data transformation. Deep understanding of data storytelling, visual best practices, and dashboard performance tuning. Familiarity with DAX Studio and Tabular Editor. Experience in handling high-volume data in production environments. Preferred (Good To Have) Exposure to Big Data technologies such as: PySpark Hadoop Hive / HDFS Spark Streaming (optional but preferred) Why Join Us? Work with a team that's passionate about data innovation. Exposure to modern data stack and tools. Flat structure and collaborative culture. Opportunity to influence data strategy and architecture decisions. Skills: data modeling,big data,pyspark,power bi,data storytelling,spark streaming,etl,sql,tabular editor,hive,power query,hadoop,python,data transformation,dax studio,dax
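To ground the ETL and modelling responsibilities above, here is a small pandas sketch that takes raw transaction records, cleans types, and aggregates them into the kind of daily fact table a Power BI model would sit on. Column names and data are invented for illustration.

```python
# Toy ETL step: raw records -> typed, aggregated fact table for a BI model.
# Column names and values are invented for illustration.
import pandas as pd

raw = pd.DataFrame({
    "order_date": ["2024-01-01", "2024-01-01", "2024-01-02", None],
    "region": ["North", "South", "North", "South"],
    "amount": ["1200.50", "830", "999.99", "450"],
})

clean = (
    raw.dropna(subset=["order_date"])                     # drop rows we cannot date
       .assign(order_date=lambda d: pd.to_datetime(d["order_date"]),
               amount=lambda d: pd.to_numeric(d["amount"]))
)

daily_sales = (
    clean.groupby(["order_date", "region"], as_index=False)["amount"]
         .sum()
         .rename(columns={"amount": "total_sales"})
)
print(daily_sales)  # ready to load into the warehouse / Power BI layer
```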

Posted 2 weeks ago

Apply

0 years

7 - 9 Lacs

India

On-site

About Venanalytics At Venanalytics, we don’t just crunch numbers — we decode them to uncover insights that drive real business impact. We’re a data-driven analytics company that partners with high-growth startups and enterprises to build powerful data products, business intelligence systems, and scalable reporting solutions. With a focus on innovation, collaboration, and continuous learning, we empower our teams to solve real-world business problems using the power of data. Role Overview We’re looking for a Power BI Data Engineer who is not just proficient in tools but passionate about building insightful, scalable, and high-performing dashboards. The ideal candidate should have strong fundamentals in data modeling, a flair for storytelling through data, and the technical skills to implement robust data solutions using Power BI, Python, and SQL. Key Responsibilities Technical Expertise: Develop scalable, accurate, and maintainable data models using Power BI, with a clear understanding of Data Modeling, DAX, Power Query, and visualization principles. Programming Proficiency: Use SQL and Python for complex data manipulation, automation, and analysis. Business Problem Translation: Collaborate with stakeholders to convert business problems into structured data-centric solutions considering performance, scalability, and commercial goals. Hypothesis Development: Break down complex use-cases into testable hypotheses and define relevant datasets required for evaluation. Solution Design: Create wireframes, proof-of-concepts (POC), and final dashboards in line with business requirements. Dashboard Quality: Ensure dashboards meet high standards of data accuracy, visual clarity, performance, and support SLAs. Performance Optimization: Continuously enhance user experience by improving performance, maintainability, and scalability of Power BI solutions. Troubleshooting & Support: Quick resolution of access, latency, and data issues as per defined SLAs. Must-Have Skills Strong experience building robust data models in Power BI Hands-on expertise with DAX (complex measures and calculated columns) Proficiency in M Language (Power Query) beyond drag-and-drop UI Clear understanding of data visualization best practices (less fluff, more insight) Solid grasp of SQL and Python for data processing Strong analytical thinking and ability to craft compelling data stories Good-to-Have (Bonus Points) Experience using DAX Studio and Tabular Editor Prior work in a high-volume data processing production environment Exposure to modern CI/CD practices or version control with BI tools Why Join Venanalytics? Be part of a fast-growing startup that puts data at the heart of every decision. Opportunity to work on high-impact, real-world business challenges. Collaborative, transparent, and learning-oriented work environment. Flexible work culture and focus on career development. Skills: data modeling,python,analytical thinking,data visualization,power bi,sql,power query,dashboards,dax

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Experience: 2 to 8 years PlusWealth Capital Management LLP is a proprietary high-frequency trading firm, active in multiple markets including cash equities, options, and futures. We thrive on building cutting-edge, data-driven, and tech-based trading algorithms. We are currently seeking a skilled C++ Developer to join our dynamic team and contribute to the development and optimization of our trading systems. Responsibilities: 1. Raw Sockets: - Develop and maintain software components that use raw sockets for packet capture and analysis. - Optimize raw socket performance to minimize packet loss and latency. 2. PCAP Analysis: - Implement solutions for capturing and analyzing network traffic using PCAP. - Utilize tools like libpcap and tcpdump to filter and process specific packet types. 3. Multithreading: - Design and implement multithreaded applications to enhance system performance and scalability. - Manage thread lifecycle and synchronization to ensure efficient parallel processing. 4. Cross-Thread Safety: - Ensure thread safety in shared data structures using synchronization mechanisms. - Implement best practices for cross-thread communication and data sharing. 5. Memory Management: - Efficiently manage dynamic memory allocation and deallocation. - Utilize smart pointers and other C++ techniques to optimize memory usage and minimize fragmentation. 6. Cache Coherency: - Write cache-friendly code to optimize performance in multicore systems. - Implement techniques to reduce cache misses and false sharing. 7. Custom Memory Pool Programming: - Develop custom memory pools to enhance memory management efficiency. - Integrate custom memory pools with existing code and third-party libraries. 8. Motherboard and CPU Architecture: - Understand and leverage the key components of motherboard and CPU architecture to optimize system performance. - Utilize advanced CPU features like SIMD, multithreading, and out-of-order execution. 9. CPU Flags and Optimization: - Optimize software performance using CPU flags such as SSE, AVX, and FMA. - Profile and benchmark code to measure and improve performance based on CPU-specific instructions. Qualification Criteria Requirements: - A bachelor's or master's degree in computer science or a relevant field. - Proven experience or project work in C++ development only; experience in a low-latency or HFT environment is a plus. - Proficiency in multithreading and synchronization mechanisms in C++. - Expertise in memory management, including the use of smart pointers and custom memory pools, is a plus. - Knowledge of cache coherency and techniques to optimize cache performance. - Familiarity with motherboard and CPU architecture, and how it impacts system performance. - Experience with CPU flags and their usage in software optimization. - Strong problem-solving skills and the ability to work in a fast-paced, high-pressure environment. - Excellent communication skills and the ability to work collaboratively in a team. Preferred Qualifications: - Experience with high-frequency trading systems and financial markets is a plus. - Knowledge of network protocols and performance optimization techniques. - Familiarity with profiling and benchmarking tools.
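The role above is C++-centric; purely as a language-neutral sketch of the custom memory/object-pool idea listed in the responsibilities, here is a small Python example that pre-allocates message objects and recycles them instead of allocating per message. Names, sizes, and fields are illustrative, and the real work would be done in C++ with raw memory and careful cache layout.

```python
# Illustration of the object-pool idea (production HFT code would be C++).
# Pre-allocate message buffers once, then acquire/release instead of allocating.
class OrderMessage:
    __slots__ = ("symbol", "price", "qty")
    def __init__(self) -> None:
        self.symbol = ""
        self.price = 0.0
        self.qty = 0

class ObjectPool:
    def __init__(self, size: int) -> None:
        self._free = [OrderMessage() for _ in range(size)]  # allocate up front

    def acquire(self) -> OrderMessage:
        return self._free.pop() if self._free else OrderMessage()  # fallback alloc

    def release(self, obj: OrderMessage) -> None:
        obj.symbol, obj.price, obj.qty = "", 0.0, 0   # reset state before reuse
        self._free.append(obj)

pool = ObjectPool(size=1024)
msg = pool.acquire()
msg.symbol, msg.price, msg.qty = "NIFTY", 22150.5, 50
# ... hot-path processing would happen here ...
pool.release(msg)
```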

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies