As an ETL Developer on the Data and Analytics team at Guidewire, you will collaborate with our customers and SI partners who are adopting the Guidewire Data Platform as the centerpiece of their data foundation. You will facilitate, and develop hands-on where necessary, the realization of each customer's agreed-upon ETL architecture goals, adhering to Guidewire best practices and standards. You will work with our customers, partners, and other Guidewire team members to deliver successful data transformation initiatives, apply best practices for the design, development, and delivery of customer projects, and share knowledge with the wider Guidewire Data and Analytics team to enable predictable project outcomes and emerge as a leader in our thriving data practice. One of our principles is to have fun while we deliver, so this role will need to keep the delivery process fun and engaging for the team in collaboration with the broader organization. Given the dynamic nature of the work in the Data and Analytics team, we are looking for decisive, highly skilled technical problem solvers who are self-motivated, take proactive action for the benefit of our customers, and ensure that customers succeed in their journey to the Guidewire Cloud Platform. You will collaborate closely with teams located around the world and adhere to our core values: Integrity, Collegiality, and Rationality.

Key Responsibilities:
- Build out technical processes from the specifications provided in high-level design and data specification documents.
- Integrate test and validation processes and methods into every step of the development process.
- Work with lead architects and provide input into defining user stories, scope, acceptance criteria, and estimates.
- Apply a systematic problem-solving approach, coupled with a sense of ownership and drive.
- Work independently in a fast-paced Agile environment.
- Actively contribute to the knowledge base from every project you are assigned to.

Qualifications:
- Bachelor's or Master's degree in Computer Science, or an equivalent level of demonstrable professional competency, plus 3-5+ years in a technical capacity building out complex ETL data integration frameworks.
- 3+ years of experience with data processing and with ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) concepts.
- Experience with ADF or AWS Glue, Spark/Scala, GDP (Guidewire Data Platform), CDC (change data capture), and ETL data integration (see the PySpark sketch after this posting).
- Experience working with relational and/or NoSQL databases.
- Experience working with different cloud platforms (such as AWS, Azure, Snowflake, Google Cloud, etc.).
- Ability to work independently and within a team.

Nice to have:
- Insurance industry experience.
- Experience with Azure Data Factory (ADF) or AWS Glue, and with Spark/Scala.
- Experience with the Guidewire Data Platform.
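For illustration, the sketch below shows the kind of extract-transform-load step this role involves, written in PySpark (the Python API for the Spark requirement above). The bucket paths, column names, and table layout are hypothetical placeholders, not Guidewire's actual schema.

```python
# Minimal extract-transform-load sketch in PySpark.
# All paths and column names below are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("policy_etl_sketch").getOrCreate()

# Extract: read raw policy records from a (hypothetical) landing zone.
raw = spark.read.parquet("s3://example-bucket/raw/policies/")

# Transform: standardize a column name, stamp the load date,
# and drop rows missing the primary key.
curated = (
    raw.withColumnRenamed("PolicyNumber", "policy_number")
       .withColumn("load_date", F.current_date())
       .filter(F.col("policy_number").isNotNull())
)

# Load: write the curated table, partitioned by load date for
# incremental downstream consumption.
curated.write.mode("overwrite").partitionBy("load_date").parquet(
    "s3://example-bucket/curated/policies/"
)
```

Partitioning by a derived load date rather than a full timestamp keeps the partition count manageable while still supporting incremental, CDC-style downstream reads.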
Role & responsibilities Gen AI Engineer Work Mode :Hybrid Work Location : Chennai / Hyderabad Work Timing : 2 PM to 11 PM Primary : GEN AI Python, AWS Bedrock, Claude, Sagemaker , Machine Learning experience) 8+ years of full-stack development experience 5+ years of AI/ Gen AI development Strong proficiency in JavaScript/TypeScript, Python, or similar languages Experience with modern frontend frameworks (React, Vue.js, Angular) Backend development experience with REST APIs and microservices Knowledge of AWS services, specifically AWS Bedrock, Sagemaker Experience with generative AI models, LLM integration and Machine Learning Understanding of prompt engineering and model optimization Hands-on experience with foundation models (Claude, GPT, LLaMA, etc.) Experience retrieval-augmented generation (RAG) Knowledge of vector databases and semantic search AWS cloud platform expertise (Lambda, API Gateway, S3, RDS, etc.) Knowledge of financial regulatory requirements and risk frameworks. Experience integrating AI solutions into financial workflows or trading systems. Published work or patents in financial AI or applied machine learning.
Role & responsibilities Preferred candidate profile We are seeking a skilled Data Scientist with strong expertise in Python programming and Amazon SageMaker to join our data team. The ideal candidate will have a solid foundation in machine learning, data analysis, and cloud-based model deployment. You will work closely with cross-functional teams to build, deploy, and optimize predictive models and data-driven solutions at scale. Primary Skills Bachelors or Master's degree in Computer Science, Data Science, Statistics, or a related field. 12+ years of experience in data science or machine learning roles. Proficiency in Python and popular ML libraries (e.g., scikit-learn, pandas, NumPy). Hands-on experience with Amazon SageMaker for model training, tuning, and deployment. Strong understanding of supervised and unsupervised learning techniques. Experience working with large datasets and cloud platforms (AWS preferred). Excellent problem-solving and communication skills. Secondary Skill Experience with AWS services beyond SageMaker (e.g., S3, Lambda, Step Functions). Familiarity with deep learning frameworks like TensorFlow or PyTorch. Exposure to MLOps practices and tools (e.g., CI/CD for ML, MLflow, Kubeflow). Knowledge of version control (e.g., Git) and agile development practices.