Key Responsibilities:
• Design and implement security measures for the cloud platform
• Design and implement scalable processes to provision cloud access
• Identify and resolve security issues across the cloud infrastructure
• Evaluate and respond to security alerts/events triggered through the SOC
• Identify and mitigate security risks: assess vulnerabilities, identify potential threats, and implement solutions to reduce risk
• Tune security tool configurations to minimize false positives
• Develop event response documentation and processes, including diagrams for system environments, cloud operations, and security tools
• Collaborate with security leadership, engineering, and compliance to execute security strategies
• Assist other teams in resolving security issues in a manner that complies with business requirements and best practices
• Conduct security assessments and audits: regularly evaluate the security posture of cloud systems to identify areas for improvement
• Review our architecture and design through a security lens to provide actionable, timely requirements and recommendations
• Serve as a subject matter expert for security tools, applications, and processes

Required Skills and Experience:
• 5+ years of experience in a public cloud infrastructure or information security role
• Preferred certification: Azure cloud security certification
• Strong understanding of cloud platforms: knowledge of cloud providers such as AWS, Azure, and Google Cloud
• Security concepts and practices: expertise in areas such as access control, cryptography, network security, and threat detection
• Compliance and regulations: knowledge of industry standards such as ISO 27001, PCI DSS, HIPAA, and GDPR
• Scripting and automation: ability to use scripting languages and infrastructure-as-code (IaC) techniques to automate security tasks
• Experience deploying and customizing security tools such as vulnerability scanners, static analyzers, IDS/IPS, firewalls, and endpoint security monitoring
• Knowledge of networking and web protocols
• Knowledge of modern cloud technology components and deployment patterns
• Strong communication and collaboration skills
• Strong analytical and problem-solving skills: ability to diagnose security issues, communicate findings effectively, and work with other teams to resolve problems
• Ability to work independently and as part of a team
• Ability to stay calm under pressure and make quick decisions

Education:
• Bachelor's degree in Computer Science, Information Systems, Cybersecurity, or Cloud Computing
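As an illustration of the scripting-and-automation requirement above, here is a minimal sketch of automating one common security check: flagging inbound rules that expose sensitive ports to the entire internet. The rule format, function name, and port list are hypothetical and not tied to any specific cloud provider's API.

```python
import ipaddress

# Hypothetical policy: ports that should never be open to the whole internet
SENSITIVE_PORTS = {22, 3389}  # SSH and RDP

def find_risky_rules(rules):
    """Flag inbound rules that open a sensitive port to 0.0.0.0/0.

    rules: list of dicts with 'direction', 'port', and 'source' (CIDR) keys,
    e.g. as exported from a firewall or security-group configuration.
    """
    return [
        r for r in rules
        if r["direction"] == "inbound"
        and r["port"] in SENSITIVE_PORTS
        # prefixlen 0 means the source CIDR covers every address
        and ipaddress.ip_network(r["source"]).prefixlen == 0
    ]

rules = [
    {"direction": "inbound", "port": 443, "source": "0.0.0.0/0"},
    {"direction": "inbound", "port": 22, "source": "0.0.0.0/0"},
    {"direction": "inbound", "port": 22, "source": "10.0.0.0/8"},
]
risky = find_risky_rules(rules)
```

A script like this could run on a schedule and feed alerts into the SOC workflow described above, which is one way false positives get tuned out at the source.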
Job Overview:
We are seeking a Lead Data Engineer with deep expertise in Snowflake, dbt, and Apache Airflow to design, implement, and optimize scalable data solutions. This role involves working with complex datasets, building robust data pipelines, ensuring data quality, and collaborating closely with analytics and business teams to deliver actionable insights. If you are passionate about data architecture, ELT best practices, and the modern cloud data stack, we'd like to meet you.

Key Responsibilities:
• Pipeline Design & Orchestration: Build and maintain robust, scalable data pipelines using Apache Airflow, including incremental and full-load strategies, retries, and logging.
• Data Modelling & Transformation: Develop modular, tested, and documented transformations in dbt, ensuring scalability and maintainability.
• Snowflake Development: Design and maintain the data warehouse in Snowflake, optimize schemas, implement performance tuning (clustering keys, warehouse scaling, materialized views), manage access control, and use streams and tasks for automation.
• Data Quality & Monitoring: Implement validation frameworks (null checks, type checks, threshold alerts) and automated testing for data integrity and reliability.
• Collaboration: Partner with analysts, data scientists, and business stakeholders to translate reporting and analytics requirements into scalable technical solutions.
• Performance Optimization: Tune query performance and job execution efficiency, with continuous monitoring of pipeline runs.
• Infrastructure Automation: Use Terraform or similar IaC tools to provision and manage Snowflake, Airflow, and related environments.

Qualifications:
• Bachelor's or Master's degree in Computer Science, Information Technology, Data Engineering, or a related field.
• 7-10 years of experience in data engineering, with strong hands-on expertise in:
  o Snowflake (data modelling, performance tuning, access control, streams & tasks, external tables)
  o Apache Airflow (DAG design, task dependencies, dynamic tasks, error handling)
  o dbt (modular SQL development, Jinja templating, testing, documentation)
• Proficiency in SQL and Python (Spark experience is a plus).
• Experience building and managing pipelines on AWS, GCP, or Azure.
• Strong understanding of data warehousing concepts and ELT best practices.
• Familiarity with version control (Git) and CI/CD workflows.
• Exposure to infrastructure-as-code tools such as Terraform for provisioning Snowflake or Airflow environments.
• Excellent problem-solving, collaboration, and communication skills, with the ability to lead technical projects.

Good to have:
  o Experience with streaming data pipelines (Kafka, Kinesis, Pub/Sub).
  o Exposure to BI/analytics tools (Looker, Tableau, Power BI).
  o Knowledge of data governance and security best practices.
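The incremental-load strategy named in the responsibilities above can be sketched, independent of any orchestrator, as a watermark-based extract with logging. The record shape, field names, and function name are hypothetical; in practice the watermark would be persisted and the function wrapped in an Airflow task with retries configured.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("incremental_load")

def incremental_extract(source_rows, last_watermark):
    """Pull only rows newer than the stored watermark (incremental load).

    source_rows: iterable of dicts carrying an 'updated_at' sort key.
    Returns the new rows and the advanced watermark.
    """
    new_rows = [r for r in source_rows if r["updated_at"] > last_watermark]
    # If nothing new arrived, keep the old watermark unchanged
    next_watermark = max((r["updated_at"] for r in new_rows), default=last_watermark)
    log.info("extracted %d rows, watermark %s -> %s",
             len(new_rows), last_watermark, next_watermark)
    return new_rows, next_watermark

rows = [
    {"id": 1, "updated_at": "2024-01-01"},
    {"id": 2, "updated_at": "2024-01-03"},
]
batch, wm = incremental_extract(rows, "2024-01-02")
```

A full load is the degenerate case (watermark below every row), which is why the same task can serve both strategies behind a single parameter.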
Job Overview:
We are looking for a Lead AI/ML Engineer to develop, deploy, and optimize AI-driven solutions. This role requires expertise in machine learning, deep learning, and Generative AI, with a strong focus on implementing cutting-edge AI models for business applications.

Key Responsibilities:
• Develop and deploy AI/ML models to solve complex business problems.
• Design and optimize Generative AI models for various applications.
• Perform data preprocessing, feature engineering, and model tuning to enhance accuracy.
• Collaborate with cross-functional teams to integrate AI solutions into existing workflows.
• Conduct research and stay current on the latest advancements in AI, ML, and LLMs.
• Implement MLOps best practices to ensure scalable and efficient model deployment.
• Mentor junior engineers and contribute to knowledge sharing.
• Present technical findings and project results to stakeholders.

Required Skills & Qualifications:
• Bachelor's or Master's degree in Computer Science, AI, ML, or a related field.
• 5 to 8 years of hands-on experience in AI/ML development.
• Strong proficiency in Python, R, or Java, along with ML frameworks such as TensorFlow, PyTorch, or Keras.
• Expertise in Generative AI, NLP, and deep learning techniques.
• Familiarity with cloud platforms (AWS, Azure, GCP) and containerization tools (Docker, Kubernetes).
• Experience with MLOps tools for model lifecycle management.
• Strong analytical and problem-solving skills.
• Excellent communication skills for collaboration with business teams.

Preferred Skills:
• Experience with multi-agent systems and Generative AI design patterns.
• Understanding of computer vision, reinforcement learning, and AI-driven automation.
• Contributions to AI research, publications, or open-source projects.
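As a minimal illustration of the model-tuning responsibility above, here is a sketch of grid-searching a decision threshold, the simplest form of hyperparameter tuning. It uses no ML framework; the scores, labels, candidate thresholds, and function names are made up for the example.

```python
def accuracy(preds, labels):
    """Fraction of predictions that match the true labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def tune_threshold(scores, labels, candidates):
    """Grid-search a classification threshold over candidate values.

    Keeps the threshold whose binarized predictions score highest
    on the validation labels.
    """
    best_t, best_acc = None, -1.0
    for t in candidates:
        preds = [1 if s >= t else 0 for s in scores]
        acc = accuracy(preds, labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Toy validation set: model scores and ground-truth labels
scores = [0.1, 0.4, 0.6, 0.8]
labels = [0, 0, 1, 1]
best_t, best_acc = tune_threshold(scores, labels, [0.2, 0.3, 0.5])
```

The same loop structure generalizes to tuning real hyperparameters (learning rate, regularization strength) with a framework's trainer in place of the thresholding step.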