The Software Engineer - SDET is responsible for delivering high-quality software through advanced automated testing strategies. This role includes overseeing the creation and execution of robust testing frameworks, working closely with development teams to integrate testing practices seamlessly into the software development lifecycle, and leading efforts to enhance and optimize testing processes and methodologies. It also drives innovation in testing tools and techniques, ensures that testing standards are met, and contributes to the organization's overall quality assurance strategy.

Responsibilities
- Design, develop, and implement scalable test automation frameworks and tools that align with the organization's testing requirements and goals.
- Work with engineers and product managers to design test plans and scripts for verifying software functionality, performance, and security.
- Stay current with industry trends, emerging technologies, and best practices, and recommend tools to enhance testing and product quality.
- Monitor test results, analyze defect trends, and provide actionable insights for product improvements.
- Drive continuous improvement in testing processes, tools, and methodologies to enhance software quality and development efficiency.
- Document test cases, scripts, results, and metrics on coverage, defects, and performance.

Requirements
- Bachelor's degree in Computer Science, Engineering, or a related field with 4-6 years of experience in software development and test automation.
- Proficiency in programming languages (e.g., JavaScript, Python, and SQL).
- Experience with test automation tools (e.g., Selenium, Cypress, JUnit, TestNG, pytest) and CI/CD tools such as Jenkins, Bitbucket, GitHub, and CircleCI.
- Strong understanding of software development methodologies (e.g., Agile, Scrum).
- Familiarity with performance testing and security testing tools.
- Exceptional problem-solving and analytical skills, combined with strong leadership and mentoring abilities.

This job was posted by Sanoop Kannoli from Enlyft.
As a key member of our data platform team, you'll be tasked with developing our next-generation data platform. Your responsibilities will include building robust data pipelines for data acquisition and processing, implementing optimized data models, and creating APIs and data products to support our machine learning models, insights engine, and customer-facing applications. Additionally, you'll harness the power of GenAI throughout the data platform lifecycle, while maintaining a strong focus on data governance to uphold timely data availability with high accuracy.

Requirements
- Bachelor's degree or higher in Computer Science, Engineering, or a related field with 5+ years of experience in data engineering, with a strong focus on designing and building scalable data platforms and products.
- Proven expertise in data modeling, ETL/ELT processes, and data warehousing with distributed computing frameworks such as Hadoop, Spark, and Kafka.
- Proficiency in programming languages such as Python/PySpark.
- Experience with cloud platforms such as AWS, Azure, or GCP and related services (S3, Redshift, BigQuery, Dataflow).
- Strong understanding of SQL/NoSQL databases (e.g., Postgres, MySQL, Cassandra).
- Proven expertise in data quality checks to ensure data accuracy, completeness, consistency, and timeliness.
- Excellent problem-solving in a fast-paced, collaborative environment, coupled with strong communication for effective interaction with technical and non-technical stakeholders.
As a data engineer at Enlyft, you will play a vital role in our data platform team, contributing to the development of our cutting-edge data platform. Your main responsibilities will involve constructing robust data pipelines for data acquisition and processing, and implementing optimized data models. You will also be tasked with creating APIs and data products to support our machine learning models, insights engine, and customer-facing applications. Throughout the data platform lifecycle, you will leverage the power of GenAI while ensuring a strong focus on data governance to maintain timely data availability with high accuracy.

To excel in this role, we are seeking individuals with a Bachelor's degree or higher in Computer Science, Engineering, or a related field, along with at least 7 years of experience in data engineering. Your expertise should center on designing and constructing scalable data platforms and products. Proficiency in data modeling, ETL/ELT processes, and data warehousing using tools like Hadoop, Spark, and Kafka is essential. Familiarity with programming languages such as Python, Java, and SQL will also be crucial. Experience with cloud platforms like AWS, Azure, or GCP and related services (S3, Redshift, BigQuery, Dataflow) is highly desirable. A strong understanding of SQL and NoSQL databases (e.g., Postgres, MySQL, Cassandra) is required, along with proven skills in data quality checks to ensure accuracy, completeness, consistency, and timeliness of data. We are looking for problem-solvers who thrive in a fast-paced, collaborative environment and possess excellent communication skills for effective engagement with both technical and non-technical stakeholders.

Joining Enlyft means becoming part of a culture that prioritizes customer satisfaction, transparency, and a pursuit of excellence. You will work alongside a top-notch team of colleagues who are dedicated to helping you learn and grow within a collaborative environment.
About Enlyft
Data and AI are at the core of the Enlyft platform. We are looking for creative, customer- and detail-obsessed machine learning engineers who can contribute to our strong engineering culture. Our big data engine indexes billions of structured and unstructured documents and leverages data science to accurately infer the footprint of thousands of technologies and products across millions of businesses worldwide. The complex and evolving relationships between products and companies form a technological graph that is core to our predictive modeling solutions. Our machine learning models work by combining data from our customers' CRMs with our proprietary technological graph and firmographic data, and reliably predict an account's propensity to buy.

About the Role
As part of our team, you'll be tasked with handling substantial datasets to develop machine learning models for our enterprise clients. Your role will also involve contributing to the development of foundational models for our product. To excel in this position, you should possess strong analytical aptitude and a deep understanding of data analysis, mathematics, and statistics. Critical thinking and problem-solving abilities are imperative for the interpretation of data. Furthermore, we value a genuine enthusiasm for machine learning and a commitment to research.

Responsibilities:
- Develop and deploy predictive models in production, and conduct advanced analytics, data mining, and data visualization to influence strategic decisions.
- Architect and build data models to transform data into insights at scale.
- Evaluate model performance and conduct iterative model training to maximize predictive and forecast accuracy on an ongoing basis.
- Stay up to date with the latest advancements in AI/ML and GenAI, and drive innovation by evaluating new methodologies, architectures, and tools.
- Optimize data infrastructure and implement MLOps best practices to automate training, monitoring, and deployment of ML models.
- Collaborate with engineering teams to integrate AI models into production systems, ensuring scalability, security, and operational efficiency.

Requirements:
- Bachelor's degree or above in Computer Science, Applied Mathematics, Statistics, Econometrics, or a related field.
- 10+ years of industry experience in data science, machine learning, and advanced analytics, with a proven track record of leading impactful projects.
- Exceptional analytical and problem-solving skills, with the ability to break down complex business challenges into data-driven solutions.
- Deep expertise in statistical modeling, machine learning, and predictive analytics, with hands-on experience deploying models at scale.
- Strong programming skills in Python (or similar languages) and extensive experience with libraries such as pandas, NumPy, SciPy, and scikit-learn.
- Experience with big data frameworks like Spark and Databricks, including optimization for large-scale data processing.
- Proficiency in working with large, high-dimensional datasets, integrating multiple data sources, and deriving meaningful insights.
- Expertise in Generative AI (GenAI), including LLMs, transformer architectures (GPT, BERT, T5, etc.), and diffusion models (good to have).
- Experience with cloud platforms (AWS, Azure, or GCP) and working knowledge of ML model deployment (MLOps) and AI/ML lifecycle management.
- Proficiency in databases, including SQL (MySQL, SQL Server, PostgreSQL), NoSQL (MongoDB, Cassandra), or data warehouses (Snowflake, BigQuery, Redshift).
- Strong leadership skills, with experience mentoring junior data scientists and collaborating cross-functionally with product, engineering, and business teams.
- Ability to communicate technical concepts to non-technical stakeholders and drive data-informed decision-making across the organization.