3.0 - 18.0 years
0 Lacs
Karnataka
On-site
Samsung is a world leader in Memory, LCD, and System LSI technologies and is currently seeking exceptional software and hardware talent to join the Samsung Indian Design Centre and the Advanced Computing Lab (ACL) in San Jose, CA. The Samsung Austin Research Center (SARC) in Austin, TX was established in 2010 as one of Samsung's strategic investments in high-performance, low-power ARM-based device technology. The GPU design teams in Austin (SARC), San Jose (ACL), and Bangalore, India are currently developing a GPU for deployment in Samsung mobile products. Additionally, the System IP team is focused on coherent interconnect and memory controller architectures.

As a GPU Engineer, you will be part of a team responsible for designing and documenting major units in a GPU pipeline for mobile graphics applications and potentially other related markets. This mid-to-senior-level position involves working as an individual contributor to drive the functional and cycle simulators for the GPU pipeline. Collaboration with cross-functional teams, including RTL design, modeling, and software, on various sub-blocks of the end solution is essential.

**Role and Responsibilities:**
- Design and document major units in a GPU pipeline targeted at mobile graphics and machine learning.
- Develop functional and cycle simulators for the GPU pipeline, including collecting statistics for evaluating potential optimizations, prototyping to test functional correctness, and specifying detailed schemes for next-level hardware and/or software implementation (a toy illustration follows this listing).
- Collaborate with implementation, modeling, and software teams to define and develop the microarchitecture, software implementation, and/or verification plan.
- Investigate alternate approaches for important GPU workloads, incremental optimizations, and rebalancing to maximize performance in future key workloads.
- Work with software developers to understand trends in future graphics and AI applications, addressing problems faced by application and middleware developers.
- Find and/or implement applications to exercise novel algorithms in drivers/hardware.

**Minimum Requirements:**
- Experience with parallel programming.
- Knowledgeable in GPU or other parallel processing architectures.
- Strong knowledge of GPU architecture, primarily on the Linux stack or kernel-mode driver.
- Knowledge of key mobile GPU graphics workloads and compute application workloads such as computer vision, image processing, AI, and image compression.
- Knowledge of game applications, game rendering engines, and academic papers on advanced rendering techniques desirable.
- Expertise in implementing advanced graphics rendering techniques and machine-learning (AI) approaches.
- Proven ability to debug complex issues in multi-threaded environments.
- Understanding of operating system fundamentals and concepts.
- Familiarity with offline and JIT compiler designs.
- Background in Linux and Android development.
- Strong C, C++, and Python programming experience or industrial experience in systems programming (driver development a strong plus).
- Strong algorithmic background and outstanding problem-solving skills.
- System-level performance analysis and strong OS fundamentals (memory management, multithreading/synchronization, user/kernel mode interaction).
- Excellent C and C++ programming skills (assembly a plus).
- Understanding of the rasterization pipeline and modern GPU architectures.
- Excellent communication and teamwork skills.
- Ability to own a problem and drive it to completion.

**Preferred Qualifications:**
- Experience working with the ARM 64-bit architecture.
- Experience developing system software for Android OS.
- Knowledge of high-level shading languages, e.g., GLSL/HLSL.
- Understanding of modern real-time rendering game engines.
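The cycle-simulator work described in the responsibilities above can be illustrated with a toy sketch. The pipeline stage, latency, and workload below are entirely invented for illustration; this is a minimal cycle-approximate statistics collector, not Samsung's simulator or a model of any real GPU.

```python
# Illustrative only: a toy cycle-approximate model of a single shader stage
# that accumulates stall/occupancy statistics. Stage names, latencies, and the
# workload are invented for the sketch; no real GPU is modeled.
import random
from collections import Counter

def simulate(num_warps=100, shader_latency=4, seed=0):
    rng = random.Random(seed)
    # Each "warp" needs a random number of shader passes (invented workload).
    pending = [rng.randint(1, 3) for _ in range(num_warps)]
    stats = Counter()
    busy_until = 0          # cycle at which the shader stage frees up
    cycle = 0
    for passes in pending:
        for _ in range(passes):
            if cycle < busy_until:              # stage occupied: stall the issue
                stats["stall_cycles"] += busy_until - cycle
                cycle = busy_until
            busy_until = cycle + shader_latency
            stats["issued"] += 1
            cycle += 1                          # one cycle to issue the next pass
    stats["total_cycles"] = max(cycle, busy_until)
    return stats

if __name__ == "__main__":
    s = simulate()
    print(f"issued={s['issued']} stalls={s['stall_cycles']} cycles={s['total_cycles']}")
```

A real cycle simulator models many concurrent units, arbitration, and memory traffic; the point here is only the pattern of advancing a cycle counter and accumulating statistics for later optimization analysis.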
Posted 3 days ago
1.0 - 5.0 years
0 Lacs
Karnataka
On-site
If you're interested in computer graphics and in working with leading graphics validation engineers on Intel's latest GPU architecture, the GPU Hardware IP (GHI) team has an opportunity for you. In this position, you will play a key role in the pre-silicon validation of Intel's leading-edge graphics IP. You will build emulation/FPGA prototyping platforms, develop efficient verification methodologies, execute the validation plan, and debug issues found during validation.

You will define, develop, and perform functional validation for GPUs, focusing on the interaction of GPU, media, display, and system-level features. Using various hardware and software tools and techniques, you will ensure validation coverage and confirm that performance, power, and area goals are met. You will also review proposed design changes to assess their impact on validation plans, tasks, and timelines. Collaborating with other engineers, you will develop GPU validation methodologies, execute validation plans, optimize designs, troubleshoot, and perform failure analysis. Silicon debug will be part of your responsibilities: identifying root causes and resolving and triaging functional failures for GPU issues. Your tasks will include testing interactions between various GPU features using the validation infrastructure, developing pre-silicon validation infrastructure, and creating a test environment for validation testing. You will publish GPU validation reports summarizing all validation activities performed, review results, and communicate them to the relevant teams. In addition, you will work with architecture, design, verification, board, platform, and manufacturing teams to maintain and improve debug and validation test strategies, methodologies, and processes for graphics interfaces to meet the desired product specifications.

The ideal candidate will exhibit strong analytical skills and hold a Bachelor's degree in Electronics Engineering, Electrical Engineering, Computer Engineering, or a related STEM field with 3+ years of industry experience, or a Master's degree in one of those fields with 1+ years of industry experience. Required qualifications include knowledge of computer architecture and microarchitecture fundamentals, functional testing, pre-silicon validation techniques and tools, simulation, debug/troubleshooting, ZeBu emulation, FPGA prototyping, acceleration platforms, software programming in C/C++, and automation/scripting. Preferred qualifications include a deep understanding of HW/FW flows and hands-on experience with LTB, ITP, logic analyzers, oscilloscopes, scan dumps, and in-circuit emulators.

This is an Experienced Hire position on Shift 1 (India), with Bangalore, India as the primary location, under the Client Computing Group (CCG), which is responsible for driving business strategy and product development for Intel's PC products and platforms.
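Debug and automation/scripting run throughout the description above; one common form this takes in practice is a small triage script that buckets regression failures by error signature. The sketch below is hypothetical: the log directory, filename pattern, and error markers are invented and do not reflect Intel's tools or flows.

```python
# Hypothetical triage helper: scan regression logs and bucket failures by the
# first error signature found. Log directory, filename pattern, and error
# markers are all invented for illustration.
import re
from collections import defaultdict
from pathlib import Path

ERROR_RE = re.compile(r"(UVM_FATAL|UVM_ERROR|Assertion .* failed|TIMEOUT)", re.IGNORECASE)

def triage(log_dir="regression_logs"):
    buckets = defaultdict(list)
    for log in Path(log_dir).glob("*.log"):
        signature = "PASS"                      # default when no error marker is seen
        for line in log.read_text(errors="ignore").splitlines():
            m = ERROR_RE.search(line)
            if m:
                signature = m.group(1)
                break
        buckets[signature].append(log.name)
    return buckets

if __name__ == "__main__":
    # Print buckets, largest first, so the dominant failure mode is obvious.
    for sig, tests in sorted(triage().items(), key=lambda kv: -len(kv[1])):
        print(f"{sig:<30} {len(tests)} test(s)")
```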
Posted 1 week ago
3.0 - 9.0 years
3 - 8 Lacs
Bengaluru, Karnataka, India
On-site
Role Responsibilities:
- Verify complex digital design blocks (e.g., GPU, CPU, image processors) by analyzing design specifications and working with design engineers.
- Create and enhance constrained-random verification environments using SystemVerilog and UVM, or formal verification techniques with SystemVerilog Assertions (SVA); a simplified analogue is sketched after this listing.
- Write coverage measures for stimulus and corner cases, ensuring thorough testing of the design.
- Debug tests in collaboration with design engineers to ensure functional correctness and close coverage gaps before tape-out.

Job Requirements:
- Bachelor's degree in Mechanical Engineering, Electrical Engineering, Industrial Engineering, or equivalent practical experience.
- 3 years of experience with standard GPU workloads like Manhattan/3DMark and knowledge of GPU architecture.
- Experience with AMBA bus protocols like AHB/AXI/ACE.
- Experience in creating verification environments and debugging designs.
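The constrained-random and coverage work in this listing is normally written in SystemVerilog/UVM; as a language-neutral illustration, the Python sketch below mimics the idea with an invented AXI-like transaction (burst type and length), one toy constraint, and a cross-coverage counter. The field names, bins, and constraint are assumptions made for the sketch, not part of any real verification environment.

```python
# Illustrative analogue of constrained-random stimulus with coverage bins.
# Real environments use SystemVerilog/UVM covergroups; the transaction fields
# and bins here are invented purely to show the idea.
import random
from collections import Counter

BURST_TYPES = ["FIXED", "INCR", "WRAP"]                      # AXI-like burst types (example)
LEN_BINS = {"short": range(1, 5), "medium": range(5, 13), "long": range(13, 17)}

def random_txn(rng):
    # Toy "constraint": WRAP bursts only allowed with power-of-two lengths.
    burst = rng.choice(BURST_TYPES)
    length = rng.choice([1, 2, 4, 8, 16]) if burst == "WRAP" else rng.randint(1, 16)
    return burst, length

def run(n=1000, seed=1):
    rng = random.Random(seed)
    coverage = Counter()
    for _ in range(n):
        burst, length = random_txn(rng)
        len_bin = next(name for name, r in LEN_BINS.items() if length in r)
        coverage[(burst, len_bin)] += 1                      # cross of burst type x length bin
    total_bins = len(BURST_TYPES) * len(LEN_BINS)
    print(f"hit {len(coverage)}/{total_bins} cross bins")
    return coverage

if __name__ == "__main__":
    run()
```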
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As a senior engineer at NVIDIA, you will be at the forefront of groundbreaking developments in high-performance computing, artificial intelligence, and visualization. Your role will involve understanding, analyzing, profiling, and optimizing deep learning workloads on cutting-edge hardware and software platforms. You will collaborate with cross-functional teams to enhance cloud application performance on diverse GPU architectures and identify bottlenecks for optimization.

Your responsibilities will include building tools to automate workload analysis, optimization, and other critical workflows. You will drive platform optimization from the hardware to the application level and design performance benchmarks to evaluate application efficiency. Your expertise in deep learning model architectures, PyTorch, and large-scale distributed training will be essential in proposing optimizations that enhance GPU utilization.

To excel in this role, you should hold a Master's degree in CS, EE, or CSEE, or possess equivalent experience, with at least 5 years in application performance engineering. Experience with large-scale multi-node GPU infrastructure and application profiling tools, and a deep understanding of computer architecture, are required. Proficiency in Python and C/C++ for analyzing and optimizing application code is also crucial.

You can stand out from the crowd through strong fundamentals in algorithms, GPU programming experience, and hands-on experience in performance optimization on distributed systems. An understanding of NVIDIA's server and software ecosystem, coupled with expertise in storage systems, Linux file systems, and RDMA networking, will set you apart.

Join NVIDIA, a leading technology company driving the AI revolution, and play a direct role in shaping the hardware and software roadmap while impacting deep learning users globally. If you are a creative and autonomous individual who is unafraid to push the boundaries of performance analysis and optimization, we invite you to be part of our innovative team. JR1986479
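As one concrete example of the profiling work this role describes, the sketch below uses the public torch.profiler API to surface hot operators in a small, arbitrary PyTorch model. The model, batch size, and iteration count are placeholders; production analysis would typically also involve tools such as Nsight Systems, well beyond this minimal sketch.

```python
# Minimal profiling sketch using the public torch.profiler API. The model and
# input sizes are arbitrary placeholders; real workloads and vendor tooling go
# far beyond this.
import torch
from torch.profiler import profile, ProfilerActivity

def main():
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = torch.nn.Sequential(
        torch.nn.Linear(1024, 4096), torch.nn.ReLU(), torch.nn.Linear(4096, 1024)
    ).to(device)
    x = torch.randn(64, 1024, device=device)

    activities = [ProfilerActivity.CPU]
    if device == "cuda":
        activities.append(ProfilerActivity.CUDA)

    with profile(activities=activities, record_shapes=True) as prof:
        for _ in range(10):
            model(x)

    # Sort by the dominant time column to spot hot operators.
    key = "cuda_time_total" if device == "cuda" else "cpu_time_total"
    print(prof.key_averages().table(sort_by=key, row_limit=10))

if __name__ == "__main__":
    main()
```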
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As a senior engineer at NVIDIA, you will play a crucial role in the optimization of deep learning workloads on cutting-edge hardware and software platforms. Your primary responsibility will be to understand, analyze, and profile these workloads to achieve peak performance. By building automated tools for workload analysis and optimization, you will contribute to enhancing GPU utilization and cloud application performance across diverse GPU architectures.

Collaboration with cross-functional teams will be essential as you identify bottlenecks and inefficiencies in application code and propose optimizations to drive end-to-end platform optimization. Your role will involve designing and implementing performance benchmarks and testing methodologies to evaluate application performance accurately.

To qualify for this role, you should hold a Master's degree in CS, EE, or CSEE, or possess equivalent experience. With at least 5 years of experience in application performance engineering, you are expected to have a background in deep learning model architectures, proficiency in tools such as NVIDIA Nsight and Intel VTune, and a deep understanding of computer architecture and GPU fundamentals. Proficiency in Python and C/C++ will be essential for analyzing and optimizing application code effectively.

To stand out from the crowd, strong fundamentals in algorithms and GPU programming experience (CUDA or OpenCL) will be highly beneficial. Hands-on experience in performance optimization and benchmarking on large-scale distributed systems, familiarity with NVIDIA's server and software ecosystem, and expertise in storage systems, Linux file systems, and RDMA networking will further distinguish you as a top candidate.

Joining NVIDIA means being part of a dynamic team that leads the AI revolution, offering you the opportunity to directly impact the hardware and software roadmap at a fast-growing technology company. If you are unafraid to tackle challenges across the hardware/software stack and are passionate about achieving peak performance in deep learning workloads, we want to hear from you.
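The benchmarking side of the role can be sketched with a simple CUDA-event timing harness. The matrix size, datatype, and iteration count below are arbitrary choices for illustration, and the throughput figure assumes a plain dense matmul (2·n³ floating-point operations).

```python
# Hypothetical micro-benchmark harness: time a GPU matmul with CUDA events and
# report an effective TFLOP/s figure. Sizes and iteration counts are arbitrary.
import torch

def bench_matmul(n=4096, iters=20):
    assert torch.cuda.is_available(), "sketch assumes a CUDA device"
    a = torch.randn(n, n, device="cuda", dtype=torch.float16)
    b = torch.randn(n, n, device="cuda", dtype=torch.float16)

    # Warm-up so one-time costs (context creation, autotuning) are excluded.
    for _ in range(3):
        a @ b
    torch.cuda.synchronize()

    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        a @ b
    end.record()
    torch.cuda.synchronize()

    ms = start.elapsed_time(end) / iters            # milliseconds per matmul
    tflops = 2 * n ** 3 / (ms / 1e3) / 1e12         # 2*n^3 FLOPs per dense matmul
    print(f"{n}x{n} fp16 matmul: {ms:.3f} ms, ~{tflops:.1f} TFLOP/s")

if __name__ == "__main__":
    bench_matmul()
```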
Posted 2 weeks ago
8.0 - 10.0 years
8 - 10 Lacs
Chennai, Tamil Nadu, India
On-site
Manager, Algorithm Engineering

KLA is seeking a Manager, Algorithm Engineering to lead and mentor a team focused on designing and implementing robust and scalable algorithmic solutions. This pivotal role involves developing and maintaining the infrastructure for algorithm deployment at scale, optimizing performance, and driving continuous improvement in our systems.

Responsibilities:
- Lead and mentor a team of algorithm engineers, providing guidance and support to ensure their professional growth and success.
- Develop and maintain the infrastructure required for the deployment and execution of algorithms at scale.
- Collaborate with data scientists, software engineers, and product managers to design and implement robust and scalable algorithmic solutions.
- Optimize algorithm performance and resource utilization to meet business objectives.
- Stay up to date with the latest advancements in algorithm engineering and infrastructure technologies, and apply them to improve our systems.
- Drive continuous improvement in development processes, tools, and methodologies.

Skills:
- Proven experience in developing computer vision, image processing, and ML/DL algorithms.
- Familiarity with high-performance computing, parallel programming, and distributed systems.
- Strong leadership and team management skills, with a track record of successfully leading engineering teams.
- Proficiency in programming languages such as Python, C++, and CUDA.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration abilities.
- Experience with machine learning frameworks and libraries (e.g., TensorFlow, PyTorch, Scikit-learn) (preferred).
- Experience with GPU architecture and deployment toolkits such as Docker and Apptainer (preferred).

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
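As a small illustration of the GPU-accelerated image-processing work mentioned in the skills list, the sketch below expresses a Sobel edge filter as a convolution and runs it on a GPU when one is available. It is a generic textbook example on random input data, not a KLA algorithm.

```python
# Illustrative only: a Sobel edge filter expressed as a 2D convolution and run
# on GPU when available. The kernel and input are generic; this is not any
# production inspection algorithm.
import torch
import torch.nn.functional as F

def sobel_magnitude(image: torch.Tensor) -> torch.Tensor:
    """image: (H, W) grayscale tensor; returns gradient magnitude (H, W)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]], device=image.device)
    ky = kx.t()                                          # Sobel y is the transpose of Sobel x
    kernels = torch.stack([kx, ky]).unsqueeze(1)         # (2, 1, 3, 3): two output channels
    x = image.unsqueeze(0).unsqueeze(0)                  # (1, 1, H, W)
    grads = F.conv2d(x, kernels, padding=1)              # (1, 2, H, W): dI/dx and dI/dy
    return grads.pow(2).sum(dim=1).sqrt().squeeze(0)     # gradient magnitude per pixel

if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    img = torch.rand(512, 512, device=device)            # random placeholder "image"
    edges = sobel_magnitude(img)
    print(edges.shape, edges.mean().item())
```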
Posted 3 weeks ago
6.0 - 10.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Performs functional logic verification of an integrated SoC to ensure the design will meet specifications. Defines and develops scalable and reusable block, subsystem, and SoC verification plans, test benches, and the verification environment to meet the required level of coverage and conform to microarchitecture specifications. Executes verification plans and defines and runs emulation and system simulation models to verify the design, analyze power and performance, and uncover bugs. Replicates, root-causes, and debugs issues in the pre-silicon environment. Finds and implements corrective measures to resolve failing tests. Collaborates and communicates with SoC architects, microarchitects, full-chip architects, RTL developers, post-silicon, and physical design teams to improve verification of complex architectural and microarchitectural features. Documents test plans and drives technical reviews of plans and proofs with design and architecture teams. Incorporates and executes security activities within test plans, including regression and debug tests, to ensure security coverage. Maintains and improves existing functional verification infrastructure and methodology. Absorbs learning from post-silicon on the quality of validation done during pre-silicon development, updates test plans for missing coverage, and proliferates the learning to future products.

Qualifications: You must possess the minimum qualifications below to be initially considered for this position. Preferred qualifications are in addition to the minimum requirements and are considered a plus factor in identifying top candidates.

Minimum qualifications (must haves): Bachelor's degree in electrical engineering or computer engineering with 3 to 12 years of experience, or a master's degree in electrical engineering or computer engineering. 6+ years of experience in 5 or more of the following:
- Test bench bring-up at the SoC level and strong programming skills in SystemVerilog, OVM, and UVM.
- Test plan development.
- Enabling regressions and maintaining QoV (quality of validation) with good functional/code/other coverage metrics; a hypothetical automation sketch follows this listing.
- Familiarity with both simulation and emulation environments.
- Strong understanding of CPU/GPU architecture.
- Debugging RTL at the module level or debugging SoC-level system simulation failures.
- Building emulation models and enabling content.
- Working with validation engineers and central CAD teams to support and maintain verification requirements in terms of automation and tool flow support.
- Coordinating with the validation team on CAD requirements with support from CAD, IT, and engineering compute teams.
- Acting as the focal point between design and tool vendors for issues and feature enhancements.
- Training and supporting validation engineers in CAD tool flows and infrastructure.
- Monitoring and improving existing simulation environments and simulation efficiency.

Preferred qualifications: Experience with performance validation of GPUs and automation frameworks using Python is desirable.
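The regression and QoV automation mentioned above can be sketched as a small Python report generator. The CSV schema (test, status, coverage_pct) and directory layout are invented for the sketch; a real flow would pull results from the simulator and coverage databases rather than flat files.

```python
# Hypothetical regression summary: aggregate per-test status and coverage from
# CSV result files into a simple quality-of-validation (QoV) report. The CSV
# schema (test,status,coverage_pct) and directory layout are invented.
import csv
from pathlib import Path

def summarize(results_dir="regression_results"):
    passed, failed, cov = 0, [], []
    for csv_file in Path(results_dir).glob("*.csv"):
        with csv_file.open(newline="") as fh:
            for row in csv.DictReader(fh):
                if row["status"].strip().upper() == "PASS":
                    passed += 1
                else:
                    failed.append(row["test"])
                cov.append(float(row["coverage_pct"]))
    total = passed + len(failed)
    avg_cov = sum(cov) / len(cov) if cov else 0.0
    print(f"pass rate: {passed}/{total}  avg functional coverage: {avg_cov:.1f}%")
    for test in failed:
        print(f"  FAIL: {test}")

if __name__ == "__main__":
    summarize()
```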
Posted 2 months ago
10 - 15 years
55 - 60 Lacs
Bengaluru
Work from Office
- Exposure to emulation model build and sanity bring-up
- Proven emulation testbench bring-up experience with C++ and SystemVerilog BFM models
- Expertise in porting C++/SystemVerilog simulation testbenches to emulation
- Hands-on experience with SystemVerilog and C++ based testbenches
- BFM coding using C++ or SystemVerilog
- Strong Python scripting skills
- Exposure to GPU emulation model build and sanity bring-up
- Good understanding of GPU architecture
- Strong functional verification expertise
Posted 2 months ago