Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs. Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.

About The Role
The Simulator team is responsible for a core internal tool used by many teams throughout the company to ensure the success of the next-generation Cerebras WSE. The WSE is composed of an array of homogeneous tiles. Each tile is composed of a compute element, which runs independent code and has access to its own memory, and a router connecting the compute element to the four neighboring tiles. The core of the simulator is a cycle-accurate implementation of the tile. In this mode, the simulator is used for design verification work, ensuring the quality of both the ASIC design and the simulator implementation. The simulator also allows tiles to be combined into a 2D array. In this mode, it is used to develop kernel algorithms, where many tiles work together to implement a distributed operation, such as matrix multiplication, or an entire neural network, such as GPT-3 (a minimal sketch of this tile-array model appears after the qualifications below).

Responsibilities
- Develop cycle-accurate software simulators using C, C++, and Python to simulate precise system behavior of the Cerebras hardware.
- Enhance the simulator to extend to multiple architecture generations of the underlying wafer-scale engine.
- Simulate and validate design verification coverage stimulus to build functional and timing correctness into both the simulator and the RTL design.
- Extend the simulator using distributed compute frameworks such as MPI and OpenMP to scale the simulator to a cluster of machines.
- Develop the fabric and interface solution for the architecture simulator, which is used as a primary development platform for software development.
- Profile, debug, and tune the simulator software for the underlying microarchitectures to help scale simulations to the entire extent of the wafer-scale engine.
- Develop and optimize infrastructure to interface the simulator with Cerebras test infrastructure and provide APIs in Python to bridge the model to the PyTorch framework.
- Optimize the threading model and algorithms inherent to the simulator.

Skills And Qualifications
- Bachelor's or master's degree in computer science or a related field, or equivalent practical experience.
- Programming in C, C++, and Python.
- Data structures and algorithms.
- Demonstrated knowledge of computer architecture and microarchitecture.
- Software development using Verilog or VHDL.
- Verification of microarchitecture designs using the Universal Verification Methodology (UVM) framework.
- Strong problem-solving and debugging skills.
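As a rough, hypothetical illustration of the tile-array model this posting describes (not Cerebras' actual simulator; every class and method name below is invented for the sketch), a cycle-driven simulation of a 2D grid of tiles can be written in a few dozen lines of Python: each tile owns private memory and a compute step, and a router phase delivers messages to the four neighbors once per cycle.

```python
# Illustrative only: a toy cycle-driven model of a 2D array of tiles,
# each with private memory, a compute step, and a 4-neighbor router.
# Names and structure are hypothetical, not the Cerebras simulator.
from collections import deque

class Tile:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.memory = {}                 # private per-tile state
        self.inbox = deque()             # messages delivered by the router
        self.outbox = []                 # (dx, dy, payload) to send this cycle

    def compute(self, cycle):
        # Example "program": accumulate received values, forward east.
        while self.inbox:
            value = self.inbox.popleft()
            self.memory["acc"] = self.memory.get("acc", 0) + value
            self.outbox.append((1, 0, value))  # send to east neighbor

class Fabric:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.tiles = [[Tile(x, y) for x in range(width)] for y in range(height)]

    def step(self, cycle):
        # Phase 1: every tile runs one cycle of compute.
        for row in self.tiles:
            for tile in row:
                tile.compute(cycle)
        # Phase 2: routers deliver this cycle's messages to the neighbors.
        for row in self.tiles:
            for tile in row:
                for dx, dy, payload in tile.outbox:
                    nx, ny = tile.x + dx, tile.y + dy
                    if 0 <= nx < self.width and 0 <= ny < self.height:
                        self.tiles[ny][nx].inbox.append(payload)
                tile.outbox.clear()

if __name__ == "__main__":
    fabric = Fabric(4, 4)
    fabric.tiles[0][0].inbox.append(1)   # inject a value at the west edge
    for cycle in range(8):
        fabric.step(cycle)
    print(fabric.tiles[0][3].memory)     # the value has propagated along row 0
```

A production cycle-accurate simulator models the compute element and router at far finer granularity (and in C/C++ for speed), but a compute-then-route loop like this captures the structure the posting refers to.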
Why Join Cerebras
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we've reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
- Build a breakthrough AI platform beyond the constraints of the GPU.
- Publish and open-source their cutting-edge AI research.
- Work on one of the fastest AI supercomputers in the world.
- Enjoy job stability with startup vitality.
- Our simple, non-corporate work culture that respects individual beliefs.
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs. Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.

About The Role
We are seeking an experienced QA Engineer to join the Cerebras AI Inference organization. This role requires an independent, hands-on self-starter with excellent teamwork and problem-solving abilities. The ideal candidate is proactive, adaptable, and able to thrive in a rapidly changing environment.

Responsibilities
- Develop comprehensive test plans and hands-on code for functional, integration, performance, and regression tests (an illustrative functional-test sketch appears at the end of this posting).
- Define the testing methodology, select the proper tools, and drive automated test development.
- Design, build, and maintain scalable data curation and transformation pipelines for tasks ranging from coding to reasoning.
- Ensure data quality, reproducibility, and security across the pipeline lifecycle.
- Design, develop, and maintain cloud-based inference services (e.g., AWS-based custom Kubernetes deployments).
- Build automated CI/CD pipelines for deploying and updating models and packages.
- Set up monitoring, logging, and alerting for model health, latency, and drift using tools like Prometheus, Grafana, OpenTelemetry, or cloud-native observability stacks.

Skills And Qualifications
- 3+ years of experience building QA systems for cloud-based SaaS and/or API services, preferably for AI systems.
- Strong fundamentals in Python and software architecture.
- Self-sufficiency, problem solving, and a hacker mindset.
- Foundational understanding of computer systems (operating systems, distributed systems).
- Experience developing test automation and implementing quality monitoring systems.
- Excellent verbal and written communication skills.
- Strong organizational skills, teamwork, and a can-do attitude.
- Clear and detailed documentation skills.
- Experience working with geographically dispersed teams across time zones.
- Proficiency with Jira or a similar tool.

Location
This role follows a hybrid schedule, requiring in-office presence 3 days per week. Please note that fully remote is not an option. Office location: Bengaluru.

Why Join Cerebras
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we've reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
- Build a breakthrough AI platform beyond the constraints of the GPU.
- Publish and open-source their cutting-edge AI research.
- Work on one of the fastest AI supercomputers in the world.
- Enjoy job stability with startup vitality.
- Our simple, non-corporate work culture that respects individual beliefs.
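The functional-test sketch referenced in the QA posting above: a minimal pytest-style check against a hypothetical OpenAI-compatible inference endpoint. The base URL, model name, and latency budget are placeholders, not Cerebras values; a real suite would layer integration, performance, and regression tests on top.

```python
# Illustrative only: a minimal functional/latency test against a hypothetical
# OpenAI-compatible inference endpoint. URL, key, model, and budget are
# placeholders supplied via environment variables.
import os
import time
import requests

BASE_URL = os.environ.get("INFERENCE_BASE_URL", "http://localhost:8000/v1")
API_KEY = os.environ.get("INFERENCE_API_KEY", "test-key")
MODEL = os.environ.get("INFERENCE_MODEL", "example-model")

def _chat(prompt, timeout=30):
    # Send a single chat-completion request and return the raw response.
    return requests.post(
        f"{BASE_URL}/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": MODEL, "messages": [{"role": "user", "content": prompt}]},
        timeout=timeout,
    )

def test_completion_returns_non_empty_text():
    resp = _chat("Reply with the single word: pong")
    assert resp.status_code == 200
    text = resp.json()["choices"][0]["message"]["content"]
    assert isinstance(text, str) and text.strip(), "empty completion"

def test_latency_within_budget():
    start = time.monotonic()
    resp = _chat("Say hello.")
    elapsed = time.monotonic() - start
    assert resp.status_code == 200
    assert elapsed < 5.0, f"request took {elapsed:.2f}s, budget is 5s"
```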
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs. Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.

Role
Join our close-knit physical design team, where you'll excel in synthesizing, placing, routing, and integrating high-speed designs. Experience the full spectrum of physical design and implementation, collaborating closely with the RTL team and integrating these blocks seamlessly into the full-chip architecture.

Minimum Skills & Qualifications
- 10+ years of physical design, integration, and physical verification experience.
- Strong knowledge of block-level and full-chip physical verification methodology.
- Experience with the complete physical design flow.
- Skills in Design Compiler, Fusion Compiler, ICC2, or similar physical design tools.
- Expert with ICV or Calibre tools, resolving block and full-chip DRC and LVS issues.
- Expert with IR/EM analysis and resolution.
- Good understanding of full-chip floorplanning and integration.
- Strong experience in full-chip timing closure.
- Demonstrated ability to work with RTL teams to optimize for physical design.
- Ability to independently debug and resolve physical verification issues.
- BS or MS in Electrical Engineering.

Preferred Skills
- Knowledge of the Synopsys tool suite is a plus.
- Good scripting skills with languages like Tcl and Python.
- Ability to make flow enhancements.

Why Join Cerebras
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we've reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
- Build a breakthrough AI platform beyond the constraints of the GPU.
- Publish and open-source their cutting-edge AI research.
- Work on one of the fastest AI supercomputers in the world.
- Enjoy job stability with startup vitality.
- Our simple, non-corporate work culture that respects individual beliefs.
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs. Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.

About The Role
We are seeking an exceptional Detection and Response Engineer to serve on the front lines, where you will build systems to detect threats, investigate incidents, and lead coordinated response across teams. The right candidate brings hands-on experience creating reliable detections, automating repetitive tasks, and turning investigation findings into durable improvements to our security program, along with an interest in exploring AI-driven automation.

Responsibilities
- Create and optimize detections, playbooks, and workflows to quickly identify and respond to potential incidents.
- Investigate security events and participate in incident response, including on-call responsibilities.
- Automate investigation and response workflows to reduce time to detect and remediate incidents.
- Build and maintain detection and response capabilities as code, applying modern software engineering rigor (see the sketch at the end of this posting).
- Explore and apply emerging approaches, potentially leveraging AI, to strengthen our security posture.
- Document investigation and response procedures as clear runbooks for triage, escalation, and containment.

Skills And Qualifications
- 3-5 years of experience in detection engineering, incident response, or security engineering.
- Strong proficiency in Python and query languages such as SQL, with the ability to write clean, maintainable, and testable code.
- Practical knowledge of detection and response across cloud, identity, and endpoint environments.
- Familiarity with attacker behaviors and the ability to translate them into durable detection logic.
- Strong fundamentals in operating systems, networking, and log analysis.
- Excellent written communication skills, with the ability to create clear documentation.

Why Join Cerebras
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we've reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
- Build a breakthrough AI platform beyond the constraints of the GPU.
- Publish and open-source their cutting-edge AI research.
- Work on one of the fastest AI supercomputers in the world.
- Enjoy job stability with startup vitality.
- Our simple, non-corporate work culture that respects individual beliefs.
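The detection-as-code sketch referenced in the Detection and Response posting above: a single, self-contained Python rule that flags repeated authentication failures. The event schema and thresholds are hypothetical; in practice such rules live in version control with unit tests and run against a real log pipeline or SIEM.

```python
# Illustrative only: a tiny "detection as code" example with a hypothetical
# event schema. Production detections would be reviewed, tested, and deployed
# like any other code, and would read events from a log pipeline or SIEM.
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class AuthEvent:
    timestamp: datetime
    user: str
    source_ip: str
    outcome: str          # "success" or "failure"

def detect_brute_force(events, threshold=5, window=timedelta(minutes=10)):
    """Flag (user, source_ip) pairs with >= threshold failures inside window."""
    failures = defaultdict(list)
    alerts = []
    for event in sorted(events, key=lambda e: e.timestamp):
        if event.outcome != "failure":
            continue
        bucket = failures[(event.user, event.source_ip)]
        bucket.append(event.timestamp)
        # Drop failures that fell outside the sliding window.
        while bucket and event.timestamp - bucket[0] > window:
            bucket.pop(0)
        if len(bucket) >= threshold:
            alerts.append({
                "rule": "auth.brute_force",
                "user": event.user,
                "source_ip": event.source_ip,
                "count": len(bucket),
                "last_seen": event.timestamp.isoformat(),
            })
            bucket.clear()   # avoid re-alerting on the same burst
    return alerts
```

Keeping detections in code like this is what allows them to be reviewed, unit-tested, and rolled back with the same rigor as any other software.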
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs. Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.

About The Role
As a Security Engineer at Cerebras Systems, you will help secure the world's largest AI compute and fastest AI inference platform. In this role, you'll focus on product security, identifying and mitigating weaknesses in our software stack by building AI-powered security tools, performing code reviews, and conducting security testing and assessments. You will work at the intersection of cutting-edge AI and modern security engineering.

Key Responsibilities
- Build AI-powered security tools and security pipelines, leveraging Cerebras' world-class inference to detect and remediate vulnerabilities across the software development lifecycle (a minimal sketch appears below, after the qualifications).
- Implement and maintain security best practices for cloud infrastructure, containerized environments, and Kubernetes platforms, ensuring secure design, deployment, CI/CD pipelines, and operations.
- Partner with development and operations teams to provide security guidance, foster secure coding practices, and drive a culture of security awareness across the organization.
- Conduct threat modeling and risk assessments to proactively identify risks and develop mitigation strategies.
- Track, analyze, and manage third-party supply chain vulnerabilities across applications and infrastructure (on-prem and cloud) and support teams in timely remediation.
- Assist in security incident response, including investigation, analysis, resolution, and documentation of application-related incidents.

Skills & Qualifications
- 5+ years of experience in Application Security, Product Security, or Security Engineering roles.
- Bachelor's or master's degree in Computer Science, Information Security, or a related field from a top-tier institution, or a proven security research track record (open-source security tool contributions or bug bounties).
- Proficiency in programming languages (Python, Go, C++, etc.) and system design.
- Deep understanding of cybersecurity fundamentals, including threat modeling, risk assessments, and incident response.
- Experience with code security (SAST, DAST, secure supply chain, IaC security, etc.).
- Knowledge of web security tools (e.g., Burp Suite, OWASP ZAP), and familiarity with security protocols and applied cryptography.
- Strong written and verbal communication skills, with the ability to explain complex security issues to both technical and non-technical audiences.

Assets
- Experience securing applications in high-speed startup environments.
- Exposure to AI/ML security, adversarial AI, or large-scale computational workload security.
- Exposure to the OWASP Top 10, CIS Benchmarks, or compliance frameworks.
- Top bug bounty rankings, CVEs, or public Hall of Fame shout-outs.
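A minimal sketch of the kind of AI-assisted tooling the first responsibility describes: posting a code diff to an OpenAI-compatible chat endpoint and asking for potential vulnerabilities. The base URL, model name, and prompt are assumptions for illustration, not a Cerebras API specification.

```python
# Illustrative only: a hypothetical AI-assisted review step for a security
# pipeline. Any OpenAI-compatible chat API would look similar; the endpoint,
# model, and prompt here are placeholders.
import os
import requests

BASE_URL = os.environ.get("LLM_BASE_URL", "http://localhost:8000/v1")
API_KEY = os.environ.get("LLM_API_KEY", "test-key")
MODEL = os.environ.get("LLM_MODEL", "example-model")

PROMPT = (
    "You are a security reviewer. List any potential vulnerabilities in the "
    "following diff (injection, authn/authz, secrets, unsafe deserialization), "
    "with file and line references. Reply 'NO FINDINGS' if there are none.\n\n{diff}"
)

def review_diff(diff_text: str) -> str:
    # Send the diff to the model and return its findings as plain text.
    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": PROMPT.format(diff=diff_text)}],
            "temperature": 0,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # In CI, this diff would come from `git diff` against the target branch.
    sample = (
        "--- a/app.py\n+++ b/app.py\n"
        "+query = f\"SELECT * FROM users WHERE id = {user_id}\"\n"
    )
    print(review_diff(sample))
```

In a CI pipeline, a step like this would typically gate on the response and attach any findings to the pull request for human review.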
Why Join Cerebras
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we've reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
- Build a breakthrough AI platform beyond the constraints of the GPU.
- Publish and open-source their cutting-edge AI research.
- Work on one of the fastest AI supercomputers in the world.
- Enjoy job stability with startup vitality.
- Our simple, non-corporate work culture that respects individual beliefs.
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs. Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.

As an IT/DevOps Engineer, you will be part of a team responsible for our engineering on-premises and cloud computing environments. You will develop and maintain the IT/DevOps infrastructure required to run our day-to-day operations and help develop and implement future strategic initiatives. A proven ability to automate day-to-day tasks is critical to the role.

Responsibilities
- Design, deploy, and maintain infrastructure automation and configuration management.
- Implement and manage CI/CD pipelines and deployment processes.
- Monitor, troubleshoot, and optimize cloud and on-premises infrastructure.
- Develop and maintain Infrastructure as Code (IaC) using tools like Terraform and Ansible.
- Respond to general IT requests and infrastructure incidents.
- Ensure security best practices are implemented across all systems (a small automation sketch follows this list).
- Collaborate with engineering and development teams to evaluate and identify optimal cloud solutions.
- Modify and improve existing systems for scalability and performance.
- Develop and maintain cloud solutions in accordance with best practices.
- Monitor network infrastructure health and performance, with hands-on ability to configure and manage switches, VLANs, and firewalls as needed.
- Configure and manage Palo Alto firewalls and the Panorama management platform.
- Design and implement comprehensive networking solutions spanning on-premises datacenters and cloud environments.
- Manage Kubernetes clusters and container orchestration with a deep understanding of cloud-native networking.
- Troubleshoot complex network issues across hybrid cloud and datacenter environments.
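A small automation sketch in the spirit of the responsibilities above: a hypothetical Python/boto3 audit that flags AWS security groups allowing SSH from anywhere. The region, port, and reporting format are assumptions; equivalent guardrails are often expressed as Terraform policies or AWS Config rules instead.

```python
# Illustrative only: audit AWS security groups for SSH open to 0.0.0.0/0.
# Region, port, and output format are placeholders for the sketch.
import boto3

def find_open_ssh_groups(region="us-east-1", port=22):
    ec2 = boto3.client("ec2", region_name=region)
    findings = []
    for group in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in group.get("IpPermissions", []):
            from_port = rule.get("FromPort")
            to_port = rule.get("ToPort")
            covers_port = (
                rule.get("IpProtocol") == "-1"            # all traffic
                or (from_port is not None and from_port <= port <= to_port)
            )
            if not covers_port:
                continue
            for ip_range in rule.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    findings.append(group["GroupId"])
    return sorted(set(findings))

if __name__ == "__main__":
    for group_id in find_open_ssh_groups():
        print(f"security group {group_id} allows SSH from 0.0.0.0/0")
```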
Skills & Qualifications
- Minimum 5+ years of experience.
- Master's degree in Computer Science.

Programming & Scripting
- Python (required).
- PowerShell (nice to have).

Operating Systems
- Deep Linux expertise (RHEL/CentOS/Rocky), including system administration, performance tuning, and troubleshooting.
- Advanced knowledge of Linux networking, storage, and security configurations.
- Experience with Linux kernel optimization and system-level debugging.
- Ability to support mixed Mac/PC/Linux users.

Automation & Infrastructure
- Automation tools such as Ansible, Terraform, and Packer.
- Jenkins management experience.
- Kubernetes (K8s) administration and deployment.

Cloud & On-Premises Networking
- Strong networking foundation required, from switch configuration and VLAN management to cloud-native networking.
- Experience with Palo Alto firewalls and the Panorama management platform.
- Proficiency in configuring and managing network switches (Arista, Juniper), routing, and datacenter networking.
- Deep understanding of cloud networking concepts, hybrid connectivity, and multi-cloud architectures.
- AWS experience including VPC, Security Groups, EC2, EFS, ASG, Route 53, CloudFormation, Lambda, and RDS.
- AWS Professional-level certification required.
- Ability to proactively monitor and troubleshoot complex network infrastructure.

Monitoring & Security
- Monitoring tools: ELK, Grafana, Zabbix.
- Familiarity with administration and deployment of endpoint security tools.

Identity & Access Management
- Office 365 administration (Azure AD, SharePoint, OneDrive).
- Experience with Microsoft Azure AD/Okta to enable SSO across SaaS applications.

Development & Collaboration Tools
- GitHub administration.
- Experience working with Atlassian Jira Service Desk, Confluence, and Slack.

Mobile Device Management
- Experience with MDM (Intune).

Security & Compliance
- Deep understanding of security best practices and compliance frameworks.
- Experience implementing and maintaining security controls across cloud and on-premises environments.

Cloud Financial Management
- Experience with cloud cost optimization and budget management.
- Proficiency in cloud cost monitoring tools and implementing cost control strategies.

ML/AI Infrastructure
- Experience managing machine learning infrastructure and GPU-accelerated workloads.
- Knowledge of the latest GPU hardware deployment and optimization for AI/ML applications.

Why Join Cerebras
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we've reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
- Build a breakthrough AI platform beyond the constraints of the GPU.
- Publish and open-source their cutting-edge AI research.
- Work on one of the fastest AI supercomputers in the world.
- Enjoy job stability with startup vitality.
- Our simple, non-corporate work culture that respects individual beliefs.
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs. Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.

We are looking for a customer-oriented IT Support Specialist to join our growing IT team. In this role, you will be responsible for providing comprehensive technical support to our staff, resolving hardware and software issues promptly, and ensuring the smooth operation of our technology infrastructure. The ideal candidate will have a strong background in both Mac and PC environments (some Linux), excellent troubleshooting skills, and a passion for delivering exceptional IT service.

Key Responsibilities

Technical Support
- Provide first-level technical support for end users via phone, email, Atlassian Service Desk/Jira, Slack, and in person.
- Troubleshoot and resolve issues related to Mac and PC laptops, desktops, peripherals, and mobile devices.
- Diagnose and solve software problems across various applications, including the Microsoft Office 365 suite.
- Support remote workers by configuring and troubleshooting VPN connections.
- Maintain and troubleshoot network connectivity issues, including Cisco Meraki network devices.
- Respond to and manage support requests through Slack channels and direct messages.
- Review service dashboards as first-level triage and handle appropriate escalation of issues.

Employee Onboarding and Offboarding
- Lead and manage the comprehensive IT onboarding process for all new employees.
- Collaborate with HR and Recruiting teams to develop and maintain a first-class onboarding experience.
- Prepare and configure hardware (laptops, phones, peripherals) for new employees.
- Create and set up necessary accounts across all systems and platforms.
- Conduct IT orientation sessions for new employees.
- Document and optimize the IT onboarding workflow to ensure consistency and efficiency.
- Process offboarding for departing employees, ensuring proper data security protocols are followed.
- Maintain a detailed inventory of IT assets assigned to employees.

User Account Management
- Collect and securely store SSH keys for appropriate staff members.
- Configure and maintain VPN access for remote workers.
- Administer user accounts in Active Directory and the Microsoft 365 environment.
- Manage user access to Atlassian products, GitHub repositories, and Dropbox shared folders.
- Ensure proper access levels and permissions across all platforms.

Microsoft 365 Administration
- Manage the Microsoft 365 tenant, including user creation, licensing, and permissions.
- Configure and troubleshoot email accounts, distribution lists, and shared mailboxes.
- Provide support for Microsoft Teams, SharePoint, and other Microsoft 365 applications.
- Assist with Microsoft 365 security configurations and compliance requirements.
Conference Room Management
- Set up, maintain, and troubleshoot video conferencing equipment.
- Ensure conference rooms are properly updated and functioning for meetings.
- Provide user training on conference room technology.
- Perform regular maintenance checks on conference room systems.

Documentation & Process Improvement
- Create and maintain documentation for IT processes, procedures, and troubleshooting guides.
- Track all support requests through the Atlassian Service Desk/Jira ticketing system.
- Manage and organize the IT knowledge base in Atlassian Confluence.
- Create and update Jira tickets to track issues, tasks, and projects.
- Analyze recurring issues and recommend solutions to prevent future occurrences.
- Participate in IT projects and initiatives to improve infrastructure and user experience.

Skills & Qualifications

Education & Experience
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
- 3+ years of experience in an IT support or helpdesk role.
- Demonstrated experience supporting both Mac and Windows environments.

Technical Skills
- Strong knowledge of Windows and macOS operating systems.
- Experience with Microsoft 365 administration (Exchange Online, Teams, SharePoint).
- Familiarity with network protocols and troubleshooting (TCP/IP, DNS, DHCP).
- Understanding of VPN setup and maintenance.
- Experience with SSH key management and secure access protocols.
- Knowledge of Active Directory and user account management.
- Experience with Cisco Meraki network equipment configuration and troubleshooting.
- Proficiency with Atlassian tools (Jira, Service Desk, Confluence).
- Experience with GitHub repository management and user access.
- Familiarity with Dropbox administration and sharing permissions.
- Experience using Slack for team communication and support.
- Ability to monitor and interpret service dashboards for system health.
- Familiarity with IT security best practices.

Preferred Skills
- Familiarity with scripting and automation tools: PowerShell for Windows automation, Python for scripting and automation, Terraform for infrastructure as code, Ansible for configuration management, and the Go programming language.
- Experience with monitoring tools and service dashboards.
- Knowledge of cloud services (AWS, Azure, GCP).
- Understanding of CI/CD pipelines.

Soft Skills
- Excellent customer service attitude with strong interpersonal skills.
- Clear communication abilities with both technical and non-technical users.
- Problem-solving mindset with attention to detail.
- Ability to prioritize tasks effectively in a fast-paced environment.
- Self-motivated, with the ability to work independently when needed.

Certifications (Preferred)
- CompTIA A+ Certification.
- Microsoft 365 Certified: Modern Desktop Administrator Associate.
- Apple Certified Support Professional (ACSP).
- ITIL Foundation Certification.
- Cisco Meraki Network Associate Certification.
- Atlassian Certified in Jira Service Management.

Office Location: Toronto, ON. This role is primarily in-office, with an expectation of five days per week onsite. Limited hybrid flexibility may be considered based on team needs and performance. Please note that fully remote work is not an option.

Why Join Cerebras
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we've reached an inflection point in our business.
Members of our team tell us there are five main reasons they joined Cerebras:
- Build a breakthrough AI platform beyond the constraints of the GPU.
- Publish and open-source their cutting-edge AI research.
- Work on one of the fastest AI supercomputers in the world.
- Enjoy job stability with startup vitality.
- Our simple, non-corporate work culture that respects individual beliefs.