6.0 - 10.0 years
0 Lacs
Hyderabad, Telangana
On-site
An ideal candidate for the Model QA Analyst role has a strong background in mathematics and statistics, along with some programming knowledge. This expertise is essential for understanding the financial losses caused by natural perils such as earthquakes, tropical cyclones, and other disasters, as the role involves analysing and validating models that predict such events and their financial impact.

Responsibilities
- Reviewing requirement documents to understand what needs testing and the scope of updates for models/products.
- Learning about natural disasters (earthquakes, cyclones, floods, etc.) and how they are modelled and predicted.
- Creating and approving detailed test plans, ensuring all aspects are covered based on client needs and model details.
- Executing tests on time and ensuring models are tested and delivered with high quality.
- Validating model outputs across different platforms and checking for consistency.
- Analysing model results using statistical methods and ensuring algorithms meet expected standards (a brief validation sketch follows this listing).
- Ensuring thorough testing of all features and helping new team members get up to speed quickly.
- Presenting test results to peers, estimating project timelines, and managing changes in requirements.
- Handling the hardware and software needs for testing, and working with development and product owner teams to obtain and validate timely fixes.
- Designing test automation processes and managing the entire testing cycle.
- Providing regular status updates to product owners and supporting decision-making with clear communication.
- Expanding automated testing practices throughout the project life cycle and collaborating with teams to meet deadlines.
- Managing multiple product deliveries at once and learning new technologies such as Touchstone software to improve testing efficiency.

Qualifications
- A bachelor's or postgraduate degree in Engineering, Mathematics, or Statistics with strong credentials.
- 6-8 years of experience in data analytics or catastrophe modelling.
- Proficiency with relational databases (e.g., MS-SQL) and mentoring abilities.
- Experience in analysing and manipulating large datasets.
- Familiarity with statistical and mathematical software (e.g., R, Python) and strong MS Excel skills.
- Knowledge of geospatial data processing using GIS software (e.g., QGIS, ArcGIS).
- Strong analytical skills to identify patterns and mathematical relationships in data.
- Experience with probabilistic models in engineering, science, catastrophe modelling, finance, or actuarial science.
- Programming skills in numerical, scientific, and database programming.
- Excellent problem-solving skills and attention to detail.
- Experience working with global, diverse teams.
- Strong communication skills for working with cross-functional teams.
- Understanding of product development life cycles and the role of quality assurance.
- Logical thinking and process-driven decision-making.
- Strong multitasking abilities with a focus on accuracy and efficiency.
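As a minimal sketch of the kind of cross-platform consistency check described above, the snippet below compares loss outputs from two model runs using summary statistics and a two-sample Kolmogorov-Smirnov test. The file names, column name, and 1% tolerance are hypothetical assumptions, not part of the listing.

```python
# Minimal sketch: statistical consistency check between model loss outputs
# produced on two platforms. File names, column, and tolerance are hypothetical.
import pandas as pd
from scipy import stats

baseline = pd.read_csv("losses_platform_a.csv")["gross_loss"]
candidate = pd.read_csv("losses_platform_b.csv")["gross_loss"]

# Compare summary statistics of the two loss distributions side by side.
summary = pd.DataFrame({"platform_a": baseline.describe(),
                        "platform_b": candidate.describe()})
print(summary)

# Two-sample Kolmogorov-Smirnov test as a distribution-level sanity check.
result = stats.ks_2samp(baseline, candidate)
print(f"KS statistic={result.statistic:.4f}, p-value={result.pvalue:.4f}")

# Flag if the mean loss drifts beyond an (arbitrary) 1% tolerance.
drift = abs(candidate.mean() - baseline.mean()) / baseline.mean()
assert drift < 0.01, f"Mean loss drift {drift:.2%} exceeds tolerance"
```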
Posted 1 week ago
5.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the Role: Property Treaty Underwriting Advisor (Americas)
Provide high-quality underwriting analytics, both Nat Cat and (occasionally) Fire costing, to various underwriting desks at Swiss Re in North America. Responsibilities include, but are not limited to:
- Analysis of property exposure and investigation of various data issues.
- Modelling of property portfolios in Swiss Re internal and external risk assessment tools (e.g., Risk Link, MultiSnap, GRP).
- Advising underwriters in understanding model results, model differences, and limitations.
- Collaborating with underwriters on benchmarking model results and costing/pricing of reinsurance contracts, extending over time to end-to-end product design and costing (a simplified costing sketch follows this listing).
- Continuously developing and applying best modelling and costing practices.
- Collaborating with the underwriting community abroad and other business partners on various projects to generate new value-added services for Swiss Re.
- Working with underwriters during peak renewal season to model and cost treaty programs.
- Using the findings from underwriting analyses to improve Nat Cat models and tools.
- Using insights gained from working with large data volumes to create new and advanced analytics and client services.

Team
We are looking for a candidate with a strong academic record and industry experience in natural catastrophe modelling, Fire costing, or both, for our Property Underwriting Analytics group in Bangalore. This job offers you a chance to join the dynamic, fast-paced world of a highly specialized financial services organization. You will get in-depth exposure to Swiss Re's proprietary natural perils model platform and costing tools, and to natural catastrophe reinsurance business from Swiss Re's global client base. Depending on your inclination and skills, over time you will have the opportunity to widen your role at Swiss Re across multiple areas.

About You
Job requirements:
- An advanced degree in natural science or a closely related quantitative field (Statistics, Engineering, Sciences).
- About 5 years of experience using CAT models and/or property underwriting tools. Experience with probabilistic models or ongoing actuarial studies is a plus.
- An interest in actively building and improving costing and pricing techniques for (re)insurance products.
- Strong motivation to learn about the insurance and reinsurance business.
- Embraces teamwork and appreciates continuous communication and interaction in a global setting.
- Strong analytical and critical thinking skills, with a can-do and pragmatic outlook.
- A track record of successfully delivering complex analysis results to internal or external clients.
- Excellent oral and written English skills.

Reference Code: 135184
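The sketch below illustrates, in a highly simplified form, the kind of probabilistic costing the role describes: computing the average annual loss ceded to a per-occurrence reinsurance layer from an event loss table. The event rates, losses, and layer terms are invented for illustration; Swiss Re's proprietary tools are not reflected here.

```python
# Minimal sketch: costing a per-occurrence reinsurance layer from an
# event loss table (ELT). Rates, losses, and layer terms are hypothetical.

layer_attachment = 50_000_000   # layer attaches at 50m
layer_limit = 100_000_000       # 100m xs 50m

# Each entry: (annual rate of the event, ground-up loss to the portfolio)
event_loss_table = [
    (0.010, 30_000_000),
    (0.004, 80_000_000),
    (0.001, 250_000_000),
]

def layer_loss(gross_loss: float) -> float:
    """Loss ceded to the layer for a single occurrence."""
    return min(max(gross_loss - layer_attachment, 0.0), layer_limit)

# Average annual loss (AAL) to the layer: sum of rate * ceded loss.
aal = sum(rate * layer_loss(loss) for rate, loss in event_loss_table)
print(f"Layer AAL: {aal:,.0f}")  # expected annual ceded loss
```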
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
We are seeking a Document Extraction and Inference Engineer with proficiency in traditional machine learning algorithms and rule-based NLP techniques. The ideal candidate will have a solid background in document processing, structured data extraction, and inference modeling through classical ML methods. Your primary responsibility will be designing, implementing, and enhancing document extraction pipelines for diverse applications to ensure both accuracy and efficiency.

Your key responsibilities will include:
- Developing and executing document parsing and structured data extraction techniques.
- Leveraging OCR and pattern-based NLP for text extraction (a small pipeline sketch follows this listing).
- Refining rule-based and statistical models for document classification and entity recognition.
- Creating feature engineering strategies to enhance inference accuracy.
- Handling structured and semi-structured data such as PDFs, scanned documents, XML, and JSON.
- Implementing knowledge-based inference models for decision-making purposes.
- Collaborating with data engineers to construct scalable document processing pipelines.
- Performing error analysis and enhancing extraction accuracy through iterative refinements.
- Staying abreast of the latest advancements in traditional NLP and document processing techniques.

To qualify for this role, you must hold a Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field, along with a minimum of 3 years of experience in document extraction and inference modeling. Proficiency in Python and ML libraries such as Scikit-learn, NLTK, OpenCV, and Tesseract is essential, as is expertise in OCR technologies, regular expressions, and rule-based NLP. You should also have experience with SQL and database management for handling extracted data; knowledge of probabilistic models, optimization techniques, and statistical inference; familiarity with cloud-based document processing tools such as AWS Textract and Azure Form Recognizer; and strong analytical and problem-solving skills.

Preferred qualifications include experience in graph-based document analysis and knowledge graphs, knowledge of time series analysis for document-based forecasting, exposure to reinforcement learning for adaptive document processing, and an understanding of the credit/loan processing domain.

This position is based in Chennai, India.
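As a minimal sketch of the OCR-plus-rule-based extraction described above, the snippet below OCRs a scanned image with pytesseract and pulls a few fields with regular expressions. The file name, field names, and patterns are hypothetical and would depend on the actual document layouts.

```python
# Minimal sketch: OCR a scanned document and pull structured fields with
# rule-based patterns. File name and field formats are hypothetical.
import re

import pytesseract            # requires the Tesseract binary to be installed
from PIL import Image

def extract_fields(image_path: str) -> dict:
    """OCR an image and extract a few example fields via regex rules."""
    text = pytesseract.image_to_string(Image.open(image_path))

    patterns = {
        # Hypothetical field formats, for illustration only.
        "invoice_number": r"Invoice\s*(?:No\.?|#)\s*[:\-]?\s*(\w+)",
        "date": r"\b(\d{2}[/-]\d{2}[/-]\d{4})\b",
        "total": r"Total\s*[:\-]?\s*\$?([\d,]+\.\d{2})",
    }
    fields = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, text, flags=re.IGNORECASE)
        fields[name] = match.group(1) if match else None
    return fields

if __name__ == "__main__":
    print(extract_fields("scanned_invoice.png"))
```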
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
We are searching for a talented Document Extraction and Inference Engineer who possesses expertise in traditional machine learning algorithms and rule-based NLP techniques. The ideal candidate will have a solid background in document processing, structured data extraction, and inference modeling using classical ML approaches. Your primary responsibility will involve designing, implementing, and optimizing document extraction pipelines for various applications with a focus on ensuring accuracy and efficiency.

Key Responsibilities
- Developing and implementing document parsing and structured data extraction techniques.
- Utilizing OCR (Optical Character Recognition) and pattern-based NLP for text extraction.
- Building foundational models (not LLMs) to address inference problems.
- Optimizing rule-based and statistical models for document classification and entity recognition (a brief classification sketch follows this listing).
- Designing feature engineering strategies to enhance inference accuracy.
- Working with structured and semi-structured data (PDFs, scanned documents, XML, JSON).
- Implementing knowledge-based inference models for decision-making applications.
- Collaborating with data engineers to construct scalable document processing pipelines.
- Conducting error analysis and enhancing extraction accuracy through iterative refinements.
- Keeping abreast of advancements in traditional NLP and document processing techniques.

Required Qualifications
- Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field.
- 3+ years of experience in document extraction and inference modeling.
- At least 5 years of overall experience.
- Strong proficiency in Python and ML libraries (Scikit-learn, NLTK, OpenCV, Tesseract).
- Expertise in OCR technologies, regular expressions, and rule-based NLP.
- Experience with SQL and database management for handling extracted data.
- Knowledge of probabilistic models, optimization techniques, and statistical inference.
- Familiarity with cloud-based document processing (AWS Textract, Azure Form Recognizer).
- Strong analytical and problem-solving skills.

Preferred Qualifications
- Experience with graph-based document analysis and knowledge graphs.
- Knowledge of time series analysis for document-based forecasting.
- Exposure to reinforcement learning for adaptive document processing.
- Understanding of the credit/loan processing domain.

Location: Chennai, India.
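The sketch below shows one classical, non-LLM route to the document classification task mentioned above: TF-IDF features feeding a linear classifier in Scikit-learn. The sample documents and class labels are invented for illustration.

```python
# Minimal sketch: classical document classification with TF-IDF features
# and a linear model (no LLMs). Sample documents and labels are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: short document snippets and their classes.
docs = [
    "Invoice No 123 Total due $450.00",
    "Loan application form applicant income statement",
    "Invoice #998 amount payable $1,200.00",
    "Credit application requested loan amount and tenure",
]
labels = ["invoice", "loan_application", "invoice", "loan_application"]

# TF-IDF unigrams/bigrams feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(docs, labels)

print(model.predict(["Invoice 555 total $99.00"]))  # likely ['invoice']
```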
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
We are seeking engineers or applied scientists with expertise in implementing data mining and machine learning systems to contribute to the development of an innovative online education platform tailored to the unique needs of each student. Your responsibilities will include: - Demonstrating a strong passion and enthusiasm for building scalable systems that analyze extensive datasets and provide actionable insights. - Showing a proven track record of identifying connections within vast and seemingly unrelated datasets. - Working effectively with incomplete or imperfect data sets. - Possessing knowledge of linear algebra and proficiency in manipulating data through matrix algorithms. - Developing and interpreting probabilistic models of intricate, multi-dimensional systems. - Demonstrating experience in quantitative analysis using Python. - Having familiarity with database technologies such as SQL. - Showing solid programming skills in Java, C, C++, or Python. Preferred experience with the following techniques: - Collaborative filtering. - Decision trees and automatic tree generation. - Bayesian methods. - Clustering techniques, including principal component analysis, k-means, etc. (a brief sketch follows this listing). Desired qualifications include: - Proficiency in Hadoop or other MapReduce implementations. - Experience in semantic text mining or natural language processing. - Prior involvement in high-stakes information retrieval and statistical analysis, such as bioinformatics or fraud detection. If you believe you meet these qualifications and are excited about this opportunity, please send your resume to info@zunopy.com.
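As a minimal sketch of the clustering techniques named above (PCA followed by k-means), the snippet below runs both on synthetic data with Scikit-learn. The matrix shape, number of components, and cluster count are arbitrary choices for illustration.

```python
# Minimal sketch: dimensionality reduction with PCA followed by k-means
# clustering. Data and parameters here are synthetic/illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical "student interaction" matrix: 200 students x 50 features.
X = rng.normal(size=(200, 50))

# Reduce to a handful of principal components before clustering.
X_reduced = PCA(n_components=5).fit_transform(X)

# Group students into clusters (k chosen arbitrarily for illustration).
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_reduced)
print(kmeans.labels_[:10])  # cluster assignments for the first 10 students
```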
Posted 1 month ago