Location:
Experience Level:
About sftwtrs.ai
sftwtrs.ai is a leading AI lab focused on security automation, adversarial machine learning, and scalable AI-driven solutions for enterprise clients. Under the guidance of our Principal Scientist, we combine cutting-edge research with production-grade development to deliver next-generation AI products in cybersecurity and related domains.
Role Overview
Research Engineer I
Key Responsibilities
Research & Prototyping
- Dive into state-of-the-art AI/ML literature (particularly adversarial methods, anomaly detection, and automation in security contexts).
- Rapidly prototype novel model architectures, training schemes, and evaluation pipelines.
- Design experiments, run benchmarks, and analyze results to validate research hypotheses.
Software Development & Integration
- Collaborate with DevOps and MLOps teams to containerize research prototypes (e.g., Docker, Kubernetes).
- Develop and maintain production-quality codebases in Python (TensorFlow, PyTorch, scikit-learn, etc.).
- Implement data pipelines for training and inference: data ingestion, preprocessing, feature extraction, and serving.
Collaboration & Documentation
- Work closely with the Principal Scientist and cross-functional stakeholders (DevOps, Security Analysts, QA) to align on research objectives and engineering requirements.
- Author clear, concise documentation: experiment summaries, model design notes, code review comments, and API specifications.
- Participate in regular code reviews, design discussions, and sprint planning sessions.
Model Deployment & Monitoring
- Assist in deploying models to staging or production environments; integrate with internal tooling (e.g., MLflow, Kubeflow, or custom MLOps stack).
- Implement automated model-monitoring scripts to track performance drift, data quality, and security compliance metrics.
- Troubleshoot deployment issues and optimize inference pipelines for latency and throughput.
Continuous Learning & Contribution
- Stay current with AI/ML trends; present findings to the team and propose new research directions.
- Contribute to open-source libraries or internal frameworks as needed (e.g., adding new modules to our adversarial-ML toolkit).
- Mentor interns or junior engineers on machine learning best practices and coding standards.
Qualifications
Education:
- Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, Data Science, or a closely related field.
Research Experience:
- 1–3 years of hands-on experience in AI/ML research or equivalent internships.
- Familiarity with adversarial machine learning concepts (evasion attacks, poisoning attacks, adversarial training).
- Exposure to security-related ML tasks (e.g., anomaly detection in logs, malware classification using neural networks) is a strong plus.
Development Skills:
- Proficient in Python, with solid experience using at least one major deep-learning framework (TensorFlow 2.x, PyTorch).
- Demonstrated ability to write clean, modular, and well-documented code (PEP 8 compliant).
- Experience building data pipelines (using pandas, Apache Beam, or equivalent) and integrating with RESTful APIs.
Software Engineering Practices:
- Familiarity with version control (Git), CI/CD pipelines, and containerization (Docker).
- Comfortable writing unit tests (pytest or unittest) and conducting code reviews.
- Understanding of cloud services (AWS, GCP, or Azure) for training and serving models.
Analytical & Collaborative Skills:
- Strong problem-solving mindset, attention to detail, and ability to work under tight deadlines.
- Excellent written and verbal communication skills; able to present technical concepts clearly to both research and engineering audiences.
- Demonstrated ability to collaborate effectively in a small, agile team.
Preferred Skills (Not Mandatory)
- Experience with MLOps tools (MLflow, Kubeflow, or TensorFlow Extended).
- Hands-on knowledge of graph databases (e.g., JanusGraph, Neo4j) or NLP techniques (transformer models, embeddings).
- Familiarity with security compliance standards (HIPAA, GDPR) and secure software development practices.
- Exposure to Rust or Go for high-performance inference code.
- Contributions to open-source AI or security automation projects.
Why Join Us?
Cutting-Edge Research & Production Impact:
Work on adversarial ML and security-automation projects that go from concept to real-world deployment.
Hands-On Mentorship:
Collaborate directly with our Principal Scientist and Senior Engineers, learning best practices in both research methodology and production engineering.
Innovative Environment:
Join a lean, highly specialized team where your contributions are immediately visible and valued.
Professional Growth:
Access to conferences, lab resources, and continuous learning opportunities in AI, cybersecurity, and software development.
Competitive Compensation & Benefits:
Attractive salary, health insurance, and opportunities for performance-based bonuses.
How to Apply
Send your application to careers@sftwtrs.ai.
sftwtrs.ai is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.