Company Description
Hotfoot Technology Solutions is a pioneering fintech and CreditTech company providing end-to-end digital and automated lending solutions to financial institutions and loan aggregators. Founded in 2016 and headquartered in Chennai with an additional office in Noida, Hotfoot focuses on revolutionizing lending practices through innovation and automation. Our dynamic team of 200+ members strives to enhance customer experiences by simplifying and expediting financial services. Recognized as a top start-up in Chennai and awarded for our innovative platform, we are committed to setting trends in financial technology.

Role Description
This is a full-time, on-site role for a Quality Assurance Automation Engineer located in Chennai. The Quality Assurance Automation Engineer will be responsible for developing and executing automated test scripts, creating and managing test cases, conducting manual and automated testing, and ensuring the quality of our products. The role involves working closely with the development team to identify and resolve issues, improving test processes, and contributing to the continuous improvement of our testing practices. The ideal candidate will have a solid understanding of automation testing principles and hands-on experience in automating tests for web, API, and mobile applications. Prior exposure to databases and experience in the fintech domain will be considered a strong advantage.

Key Responsibilities
- Design and develop a robust automation testing framework from scratch, including architecture, reusable components, and best practices, using a codeless automation tool.
- Develop and execute both automated and manual test cases to ensure comprehensive coverage of functional requirements.
- Use codeless automation platforms to validate functionality, usability, and reliability across web, mobile, and API interfaces.
- Maintain and update automated test suites to accommodate new features and ensure strong regression coverage.
- Collaborate with development teams to log, prioritize, and resolve bugs efficiently.
- Actively participate in Agile ceremonies such as sprint planning, daily stand-ups, and retrospectives.
- Perform database validations using basic SQL queries to ensure data accuracy (see the sketch after this posting).
- Generate clear and concise test reports and quality metrics to identify trends and potential risks.
- Continuously apply automation testing best practices to improve test efficiency and minimize manual testing effort.

Required Skills
- 3+ years of experience in automation testing.
- Practical experience with automation tools for web, API, and mobile testing.
- Strong foundation in automation testing concepts, including test design, reusable components, element locators, assertions, and validations.
- Experience with, or willingness to learn, codeless automation platforms.
- Familiarity with database fundamentals and proficiency in writing simple SQL queries.
- Experience using bug tracking and test management tools (e.g., JIRA, TestRail).
- Prior experience in the Loan Origination System (LOS) or fintech domain is a significant plus.

Educational Qualifications
Bachelor's or Master's degree in Computer Science, Information Technology, or related fields (B.Tech/B.E/B.Sc/M.Sc/BCA/MCA).
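By way of illustration, the kind of SQL-backed database validation this role calls for might look like the following minimal sketch. The table and column names are hypothetical, and it assumes a MySQL database reachable via the pymysql package; this is a flavor of the work, not a prescribed implementation:

```python
import pymysql  # assumes: pip install pymysql

# Hypothetical connection details and schema, for illustration only.
conn = pymysql.connect(host="localhost", user="qa", password="secret", database="lending")

with conn.cursor() as cur:
    # Validate that no loan application is missing its mandatory applicant record.
    cur.execute("""
        SELECT a.application_id
        FROM loan_applications a
        LEFT JOIN applicants p ON p.application_id = a.application_id
        WHERE p.application_id IS NULL
    """)
    orphans = cur.fetchall()

conn.close()
assert not orphans, f"Found {len(orphans)} applications without applicants: {orphans[:5]}"
```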
Are you a dynamic individual with a passion for human resources and a strong command of English? Hotfoot Technology Solutions is seeking a talented HR intern to join our team! As an intern, you will have the opportunity to gain valuable hands-on experience in the field of HR while working in a fast-paced and innovative tech company.

Key Responsibilities:
1. Assist the HR team in the recruitment process by screening resumes and scheduling interviews
2. Help with the employee onboarding process, including preparing new-hire paperwork and orientation
3. Assist in maintaining employee records and updating HR databases
4. Collaborate with the HR team to plan and coordinate employee engagement activities and events
5. Participate in HR projects and initiatives to support company goals and objectives
6. Provide general administrative support to the HR department as needed

If you are a team player, detail-oriented, and eager to learn, this is the perfect opportunity for you to kickstart your career in HR. Apply now and be a part of our exciting journey at Hotfoot Technology Solutions.

Job Types: Full-time, Fresher, Internship
Contract length: 3-6 months
Pay: ₹100,000.00 - ₹200,000.00 per year
Benefits: Flexible schedule
Schedule: Day shift, fixed shift, Monday to Friday
Ability to commute/relocate: Noida Sector 62, Noida, Uttar Pradesh: reliably commute or plan to relocate before starting work (required)
Application Question(s): This is an internship with the possibility of full-time job placement. Are you open to this?
Work Location: In person
We are looking for a Document Extraction and Inference Engineer with expertise in traditional machine learning algorithms and rule-based NLP techniques. The ideal candidate will have a strong foundation in document processing, structured data extraction, and inference modeling using classical ML approaches. You will design, implement, and optimize document extraction pipelines for various applications, ensuring accuracy and efficiency.

Key Responsibilities
- Develop and implement document parsing and structured data extraction techniques.
- Utilize OCR (Optical Character Recognition) and pattern-based NLP for text extraction (a sketch follows this posting).
- Optimize rule-based and statistical models for document classification and entity recognition.
- Design feature engineering strategies for improving inference accuracy.
- Work with structured and semi-structured data (PDFs, scanned documents, XML, JSON).
- Implement knowledge-based inference models for decision-making applications.
- Collaborate with data engineers to build scalable document processing pipelines.
- Conduct error analysis and improve extraction accuracy through iterative refinements.
- Stay updated on advancements in traditional NLP and document processing techniques.

Required Qualifications
- Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field.
- 3+ years of experience in document extraction and inference modeling.
- Strong proficiency in Python and ML libraries (Scikit-learn, NLTK, OpenCV, Tesseract).
- Expertise in OCR technologies, regular expressions, and rule-based NLP.
- Experience with SQL and database management for handling extracted data.
- Knowledge of probabilistic models, optimization techniques, and statistical inference.
- Familiarity with cloud-based document processing (AWS Textract, Azure Form Recognizer).
- Strong analytical and problem-solving skills.

Preferred Qualifications
- Experience with graph-based document analysis and knowledge graphs.
- Knowledge of time series analysis for document-based forecasting.
- Exposure to reinforcement learning for adaptive document processing.
- Understanding of the credit/loan processing domain.

Location: Chennai, India
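As a rough illustration of the OCR-plus-pattern-matching pipeline described above, a minimal sketch might look like this. It assumes the pytesseract and Pillow packages with a local Tesseract install; the input file name is hypothetical, and the PAN-number pattern is one plausible lending-domain rule:

```python
import re

import pytesseract  # assumes a local Tesseract OCR installation
from PIL import Image

# Hypothetical scanned KYC document, for illustration only.
page = Image.open("scanned_kyc_page.png")

# Step 1: OCR the page image into raw text.
text = pytesseract.image_to_string(page)

# Step 2: rule-based extraction with regular expressions.
# Indian PAN numbers follow the pattern AAAAA9999A.
pan_pattern = re.compile(r"\b[A-Z]{5}[0-9]{4}[A-Z]\b")
date_pattern = re.compile(r"\b\d{2}[/-]\d{2}[/-]\d{4}\b")

record = {
    "pan_numbers": pan_pattern.findall(text),
    "dates": date_pattern.findall(text),
}
print(record)
```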
Role Brief
The ideal candidate will have a strong background in software engineering with a focus on DevOps practices and tools, along with extensive experience in shell scripting, application deployment, and release management. The primary responsibility is to streamline our development processes, ensure smooth deployments, and manage the release lifecycle for our flagship products.

Role & Responsibilities
- Implement and maintain deployment pipelines to facilitate seamless code integration and delivery.
- Ensure consistency across development, testing, and production environments.
- Set up and manage monitoring tools to ensure system reliability and performance; analyze logs for troubleshooting and performance tuning.
- Work closely with development, QA, and operations teams to ensure successful delivery and deployment of applications.
- Develop, maintain, and optimize shell scripts to automate tasks and processes.
- Manage and execute deployment processes for Java-based applications, ensuring high availability and performance (see the sketch after this posting).
- Oversee the end-to-end release process, including planning, scheduling, and coordinating releases across multiple environments.
- Troubleshoot issues and fix code bugs.
- Identify and deploy cyber-security measures by continuously performing vulnerability assessment and risk management.
- Handle incident management and root cause analysis.

Role Requirements
- 7+ years of experience as a DevOps Engineer or in a similar software engineering role.
- Proficiency in shell scripting (Bash, etc.) with a strong understanding of automation tools and frameworks.
- Proven experience in deploying and managing Java-based applications both manually and automatically; experience with manual deployment is a must.
- Solid understanding of release management processes and best practices.
- Proficiency with version control systems (Git, Bitbucket, etc.).
- Strong experience handling Nginx configuration files and running Nginx as a reverse proxy.
- Experience handling SSL certificates and DNS management.
- Strong experience with environment configuration and management.
- Experience writing MySQL/MariaDB queries.
- Good to have: knowledge of handling YAML property files.
- Understanding of containerization (Docker) and orchestration (Kubernetes, EKS) techniques.
- Strong analytical and troubleshooting skills.
- Excellent verbal and written communication skills.

Key Skillset
Linux, Bash shell scripting, Nginx, Java microservices deployment, RabbitMQ configuration and setup, server setup and configuration, networking concepts, AWS resources (EC2, RDS), SSL certificate installation, Tomcat, manual deployment, DB configuration in property files, software package installation and upgrades on servers, OS patch activity, VAPT.

Notice Period: Available to join in less than 30 days
Emails: rohit.g@hotfoot.co.in / Nisha.ap@hotfoot.co.in
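For a flavor of the deployment work involved, a minimal post-deployment smoke check for a Java service running behind an Nginx reverse proxy might look like the sketch below. The base URL and endpoint paths are hypothetical, and it assumes the requests package:

```python
import sys

import requests  # assumes: pip install requests

# Hypothetical endpoints exposed by a Java microservice behind Nginx.
BASE_URL = "https://app.example.com"
ENDPOINTS = ["/actuator/health", "/api/version"]

failures = []
for path in ENDPOINTS:
    try:
        resp = requests.get(BASE_URL + path, timeout=5)
        if resp.status_code != 200:
            failures.append(f"{path}: HTTP {resp.status_code}")
    except requests.RequestException as exc:
        failures.append(f"{path}: {exc}")

if failures:
    print("Smoke check FAILED:\n" + "\n".join(failures))
    sys.exit(1)  # non-zero exit signals a failed deployment verification
print("All endpoints healthy.")
```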
Hands-On | Micro Data Lakes | Enterprise Data Strategy

Are you a hands-on Data Architect who thrives on solving complex data problems across structured and unstructured sources? Do you enjoy designing micro data lakes and driving enterprise-wide data strategy? If so, we want to hear from you.

What You Will Do
- Design and build micro data lakes tailored to the lending domain (a sketch follows this posting).
- Define and implement enterprise data strategies, including modelling, lineage, and governance.
- Architect and build robust data pipelines for batch and real-time data ingestion.
- Develop strategies for extracting, transforming, and storing data from APIs, PDFs, logs, databases, and more.
- Establish best practices for data quality, metadata management, and data lifecycle control.
- Be hands-on in implementing the processes, strategies, and tools needed to create differentiated products (must have).
- Collaborate with engineering and product teams to align data architecture with business goals.
- Evaluate and integrate modern data platforms and tools such as Databricks, Spark, Kafka, Snowflake, AWS, GCP, and Azure.
- Mentor data engineers and advocate for engineering excellence in data practices.

What You Bring
- 10+ years of experience in data architecture and engineering.
- Deep understanding of structured and unstructured data ecosystems.
- Hands-on experience with ETL, ELT, stream processing, querying, and data modelling.
- Proficiency in tools and languages such as Spark, Kafka, Airflow, SQL, Amundsen, Glue Catalog, and Python.
- Expertise in cloud-native data platforms including AWS, Azure, or GCP.
- Strong grounding in data governance, privacy, and compliance standards.
- A strategic mindset with the ability to execute hands-on when needed.

Nice to Have
- Exposure to the lending domain.
- Exposure to ML pipelines or AI integrations.
- Background in fintech, lending, or regulatory data environments.

What We Offer
- An opportunity to lead data-first transformation and create products that accelerate AI adoption.
- Autonomy to design, build, and scale modern data architecture.
- A forward-thinking, collaborative, and tech-driven culture.
- Access to the latest tools and technologies in the data ecosystem.

Location: Chennai
Experience: 10-15 Years | Full-Time | Work From Office

Ready to design the future of data with us? Apply now!

#DataArchitect #DataEngineering #MicroDataLakes #EnterpriseData #BigData #CloudData #StreamingData #ETL #DataStrategy #FintechJobs #HiringNow #AWS #GCP #Azure #Kafka #Spark #AI #ML
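To give a taste of the pipeline work described above, a minimal batch-ingestion step for a micro data lake might look like this sketch. The bucket paths, business key, and source-system name are hypothetical, and it assumes a working PySpark environment (with S3 connectivity configured):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("micro-data-lake-ingest").getOrCreate()

# Hypothetical raw loan-application events landing as JSON in a raw zone.
raw = spark.read.json("s3://raw-zone/loan_applications/2024-06-01/")

# Light curation: stamp lineage metadata and deduplicate on a business key.
curated = (
    raw.withColumn("ingested_at", F.current_timestamp())
       .withColumn("source_system", F.lit("los-api"))
       .dropDuplicates(["application_id"])  # hypothetical business key
)

# Write to the curated zone, partitioned for downstream querying.
curated.write.mode("append").partitionBy("source_system").parquet(
    "s3://curated-zone/loan_applications/"
)
```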
Roles and Responsibilities
- Implement and maintain Continuous Integration and Continuous Deployment pipelines to facilitate seamless code integration and delivery.
- Ensure consistency across development, testing, and production environments.
- Set up and manage monitoring tools to ensure system reliability and performance; analyze logs for troubleshooting and performance tuning.
- Work closely with development, QA, and operations teams to ensure successful delivery and deployment of applications.
- Develop, maintain, and optimize shell scripts to automate tasks and processes.
- Manage and execute deployment processes for Java-based applications, ensuring high availability and performance.
- Oversee the end-to-end release process, including planning, scheduling, and coordinating releases across multiple environments.
- Troubleshoot issues and fix code bugs.
- Identify and deploy cyber-security measures by continuously performing vulnerability assessment and risk management.
- Handle incident management and root cause analysis.

Required Skills
- Minimum qualification: Bachelor's degree in Engineering or Computer Applications.
- 3+ years of experience as a DevOps Engineer or in a similar software engineering role.
- Experience working with a cloud platform such as AWS, Azure, or GCP, including instance setup and handling S3 buckets/blob storage and RDS.
- Proficiency in shell scripting (Bash, etc.) with a strong understanding of automation tools and frameworks.
- Proven experience in deploying and managing Java-based applications both manually and automatically; experience with manual deployment is a must.
- Solid understanding of release management processes and best practices.
- Hands-on experience with CI/CD tools (Jenkins, GitLab CI, etc.).
- Proficiency with version control systems (Git, Bitbucket, etc.).
- Strong experience handling Nginx configuration files and running Nginx as a reverse proxy.
- Experience handling SSL certificates and DNS management.
- Strong experience with environment configuration and management.
- Experience writing MySQL/MariaDB queries.
- Good to have: knowledge of handling YAML property files (see the sketch after this posting).
- Understanding of containerization (Docker) and orchestration (Kubernetes, EKS) techniques.
- Strong analytical and troubleshooting skills.
- Excellent verbal and written communication skills.

Location: Chennai
Experience: 3-8 years
Notice period: Immediate joiner, less than 30 days
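For illustration, handling the YAML property files mentioned above might involve something like this minimal pre-deployment check. The file name and key layout are hypothetical, and it assumes the PyYAML package:

```python
import yaml  # assumes: pip install pyyaml

# Hypothetical application property file, for illustration only.
with open("application.yaml") as fh:
    props = yaml.safe_load(fh)

# Validate the DB configuration block before a deployment proceeds.
db = props.get("datasource", {})
required = ["url", "username", "password"]
missing = [key for key in required if not db.get(key)]

if missing:
    raise SystemExit(f"application.yaml is missing datasource keys: {missing}")
print(f"DB target: {db['url']}")
```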
You have an exciting opportunity to join our team as a Technical Lead with 10 to 15 years of experience. In this role, you will be responsible for understanding business requirements and the product and platform architecture, contributing to architecture and design, and participating in day-to-day technology implementation under the guidance of architects.

Your primary responsibilities will include taking ownership of specific components/sub-components, driving their design and implementation, leading the team responsible for delivering these components, and ensuring all information necessary for design, coding, testing, and release is available. You will also focus on designing reusable and maintainable components for applications. Additionally, you will be expected to mentor and guide senior and junior developers, assist in integrating functional implementations, and provide the support needed to ensure successful project deliveries.

To excel in this role, you should have a strong background in web applications using Java and JEE technologies, particularly with Spring and Spring Boot. Demonstrated expertise in Java, JEE, JavaScript, jQuery, Bootstrap, HTML5, JPA, Hibernate, Spring Frameworks, and Spring Data is essential. Exposure to AWS cloud infrastructure, web services, database design, security best practices, performance tuning, and Agile/Scrum methodologies is also required.

The ideal candidate will possess excellent written and verbal communication skills, strong analytical and problem-solving abilities, and the capability to work effectively within a team to meet project deadlines.

This is a full-time position based in Chennai and Noida, and the role falls under the Development category. If you are a proactive and experienced Technical Lead looking for a challenging opportunity, we encourage you to apply and be a part of our dynamic team.
We are searching for a talented Document Extraction and Inference Engineer with expertise in traditional machine learning algorithms and rule-based NLP techniques. The ideal candidate will have a solid background in document processing, structured data extraction, and inference modeling using classical ML approaches. Your primary responsibility will be designing, implementing, and optimizing document extraction pipelines for various applications, with a focus on ensuring accuracy and efficiency.

Key Responsibilities
- Developing and implementing document parsing and structured data extraction techniques.
- Utilizing OCR (Optical Character Recognition) and pattern-based NLP for text extraction.
- Building foundational models (not LLMs) to address inference problems.
- Optimizing rule-based and statistical models for document classification and entity recognition.
- Designing feature engineering strategies to enhance inference accuracy.
- Working with structured and semi-structured data (PDFs, scanned documents, XML, JSON).
- Implementing knowledge-based inference models for decision-making applications.
- Collaborating with data engineers to construct scalable document processing pipelines.
- Conducting error analysis and enhancing extraction accuracy through iterative refinements.
- Keeping abreast of advancements in traditional NLP and document processing techniques.

Required Qualifications
- Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field.
- 3+ years of experience in document extraction and inference modeling.
- At least 5+ years of overall experience.
- Strong proficiency in Python and ML libraries (Scikit-learn, NLTK, OpenCV, Tesseract).
- Expertise in OCR technologies, regular expressions, and rule-based NLP.
- Experience with SQL and database management for handling extracted data.
- Knowledge of probabilistic models, optimization techniques, and statistical inference.
- Familiarity with cloud-based document processing (AWS Textract, Azure Form Recognizer).
- Strong analytical and problem-solving skills.

Preferred Qualifications
- Experience with graph-based document analysis and knowledge graphs.
- Knowledge of time series analysis for document-based forecasting.
- Exposure to reinforcement learning for adaptive document processing.
- Understanding of the credit/loan processing domain.

Location: Chennai, India
As a production support engineer, you will be responsible for resolving production support issues within the given SLA. Your responsibilities will include conducting Root Cause Analysis (RCA) for production issues and collaborating with the development team to implement permanent fixes. Additionally, you will participate in knowledge transfer sessions to deepen your understanding of the product and domain.

Your role will involve suggesting solutions to complex issues by thoroughly analyzing the root cause and impact of defects. You will work closely with the Application Development Team to ensure successful deployment of software releases in both User Acceptance Testing and Production environments. Monitoring process and software changes that affect production support, communicating project information to the production support staff, and escalating production support issues to the project team will also be part of your duties.

The ideal candidate should have knowledge of operating systems such as Linux, Unix, and Windows. Hands-on experience with Oracle, SQL, and MySQL, as well as familiarity with patch deployments and server configurations, is essential for this role. Strong written and verbal communication skills, excellent analytical abilities, and effective problem-solving skills are crucial, as is being a good team player who can deliver results within specified timelines.

This position requires 3 to 7 years of experience in a similar role. There are 4 open positions for this full-time (permanent) opportunity. Candidates with a BE/B.Tech/B.Sc (CS)/M.Sc (CS) qualification are encouraged to apply. The expected notice period can be immediate, serving notice period, or less than 30 days. This job falls under the Development and Testing category and is a full-time position based in Chennai or Noida.
Location: Chennai (HQ) - Onsite
Function: Product Security
Experience: 7-12 years (incl. 2+ years in a lead/ownership role)

About the role
We're looking for a Product Security Lead to embed security into our SDLC and own end-to-end VAPT remediation across our lending product suite (LOS/LMS, rules engine, analytics). You'll partner with engineering and platform teams to design, build, and operate secure-by-default products used by leading financial institutions.

What you'll do
- Own the Secure SDLC for microservices (Java/Spring Boot), Node/TypeScript backends, Angular UIs, and Android/Flutter apps: policy, standards, and release gates.
- Build and run CI/CD security controls: SAST, SCA/SBOM, secrets and IaC checks, container/image scanning; automate DAST/IAST in pipelines; enforce block-on-fail where needed (a toy example follows this posting).
- Drive VAPT end-to-end: scope with internal/third-party testers, triage findings, set SLAs, track remediation to closure; verify fixes and prevent regressions.
- Threat model and review designs/code for authN/Z, crypto, session management, API security, data protection/PII, and high-risk modules (payments, onboarding, documents).
- Cloud and platform security (AWS): baselines for EC2/ALB, RDS/KMS, S3 policies, network segmentation, mTLS/JWT service auth, Vault-backed secrets, and key rotation.
- Observability and governance: wire security logs to SIEM, define AppSec KPIs (MTTR, SLA adherence, gate coverage), and report risk posture to engineering leadership.
- Upskill teams: run secure-coding workshops, build a "security champions" program, and create playbooks/runbooks for common vulnerabilities and abuse cases.

What you'll bring
- 7-12 years in Application/Product Security, including leading Secure SDLC and VAPT remediation in a product engineering environment.
- Hands-on experience with SAST/SCA/DAST/IAST, code reviews, and threat modeling (e.g., STRIDE); ability to read code in Java/Spring, Node/TypeScript, and Angular.
- Prior experience integrating security checks and gating criteria into CI pipelines with platforms like SonarQube.
- Strong grasp of the OWASP Top 10, API Security Top 10, ASVS, CWE, secrets management, and CI/CD hardening.
- AWS security experience: IAM, KMS, RDS encryption, SG/WAF, CloudTrail/GuardDuty; familiarity with Docker/Kubernetes and IaC (Terraform/CloudFormation).
- Experience running vendor/third-party VAPT cycles and landing fixes to SLA with engineering teams.
- Awareness of compliance contexts (ISO 27001/SOC 2, RBI guidance, DPDP Act) and secure handling of PII/financial data.
- Nice to have: mobile app security (OWASP MASVS), OAuth2/OIDC, mTLS, WebAuthn/modern auth patterns; Kafka, Redis, NGINX, Consul, Vault.
- Certifications (optional, a plus): OSWE/OSCP, GWAPT/GWEB, CSSLP.

What success looks like (first 6 months)
- ≥95% of Critical/High findings closed within SLA across services.
- All repos behind security gates with SBOMs published; zero hard-coded secrets; baseline threat models for top services.
- A repeatable VAPT → remediation → verification loop with dashboards visible to leadership.

Why join us
- Build security for mission-critical fintech products at scale.
- High ownership, direct impact, and the chance to set the bar for product security across our stack.
- Collaborative culture with strong engineering, rapid delivery, and growth opportunities.
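To give a flavor of the block-on-fail secrets checks described above, a toy CI-gate script might look like the following sketch. The patterns and scanned file extensions are illustrative only; a real pipeline would use the dedicated SAST/SCA/secrets scanners named in the posting, which ship far richer rule sets:

```python
import re
import sys
from pathlib import Path

# Illustrative high-signal patterns, not a production rule set.
PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "hard-coded password": re.compile(r"password\s*=\s*['\"][^'\"]{6,}['\"]", re.IGNORECASE),
}

SCANNED_SUFFIXES = {".java", ".ts", ".py", ".yaml", ".yml", ".properties"}

findings = []
for path in Path(".").rglob("*"):
    if path.is_file() and path.suffix in SCANNED_SUFFIXES:
        text = path.read_text(errors="ignore")
        for name, pattern in PATTERNS.items():
            if pattern.search(text):
                findings.append(f"{path}: possible {name}")

if findings:
    print("\n".join(findings))
    sys.exit(1)  # non-zero exit enforces block-on-fail in the CI gate
print("No obvious secrets found.")
```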