Job Description
About Credit Saison India

Established in 2019, Credit Saison India is one of the country's fastest-growing Non-Banking Financial Company (NBFC) lenders. With verticals in wholesale lending, direct lending, and tech-enabled partnerships with NBFCs and fintechs, Credit Saison India's tech-enabled model, coupled with its underwriting capability, facilitates lending at scale. This helps meet India's significant credit gap, especially within underserved and underpenetrated segments of the population. Committed to growing as a lender and evolving its offerings in India for the long term, Credit Saison India caters to MSMEs, households, individuals, and more. Registered with the Reserve Bank of India (RBI) and holding an AAA rating from CRISIL (a subsidiary of S&P Global) and CARE Ratings, Credit Saison India currently operates through a branch network of 45 physical offices, manages 1.2 million active loans and an AUM of over US$1.5B, and employs about 1,000 people.

Credit Saison India is part of Saison International, a global financial company with a mission to bring people, partners, and technology together to create resilient and innovative financial solutions for positive impact. With over 1,000 employees working across Saison's global operations based in Singapore, India, Indonesia, Thailand, Vietnam, Mexico, and Brazil, Credit Saison India is dedicated to being a transformative partner in enabling the dreams of people.

Roles & Responsibilities:
- Promote a DataOps approach to data science, engineering, and analytics delivery processes, automating data provisioning, testing, and monitoring and shortening CI/CD cycles.
- Collaborate with data & ML leads to design and build optimal data pipeline architecture for data solutions, including data science products.
- Ensure data pipelines are scalable and performant, while creating and maintaining the services that connect data products.
- Develop dashboards and tools for efficient monitoring of data and ML infrastructure, pipelines, ETL, and analytics delivery processes.
- Build end-to-end event instrumentation and alerting to detect anomalies in systems or data.
- Assist in managing data and ML infrastructure, including upgrades, monitoring, and optimization.
- Collaborate with IT DevOps engineers and participate in enterprise DevOps activities.
- Share knowledge of infrastructure and data standards with other developers, and be an active part of the tech community, promoting engineering best practices.
- Contribute to innovative POCs with the data & engineering teams, and stay flexible about technology approaches in order to leverage new technologies effectively.

Required Skills & Qualifications:
- Strong problem-solving skills, clear communication, and a positive contribution to a DevOps/DataOps culture.
- Knowledge of the latest DevOps tools and practices.
- Experience with data pipelines within AWS (Glue, Data Pipeline, Athena, EMR, DMS, Spark) and databases such as Aurora, MySQL, and MariaDB.
- Proficiency in building CI/CD pipelines for containerized Java/Python code stacks.
- Comfort with Git workflows and experience with applications deployed in AWS.
- Familiarity with configuration management and provisioning tools (e.g., Ansible, CloudFormation, Terraform) and scripting languages such as Bash, Python, or JavaScript.
- Knowledge of orchestration/containerization using Docker and Kubernetes, with a basic understanding of data science and ML engineering.
- Bachelor's degree in computer science or a similar field, or a big data background from a top-tier university.
- Experience: 6-10 years of relevant experience.