81 Palantir Foundry Jobs

Set up a job alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

4.0 - 9.0 years

11 - 21 Lacs

Kolkata, Mumbai, Gurugram

Hybrid

Responsibilities (JD: Palantir Foundry):
- 3+ years of experience implementing analytical solutions using Palantir Foundry, preferably in PySpark on hyperscaler platforms (cloud services such as AWS, GCP, and Azure), with a focus on building data transformation pipelines at scale.
- Team management: must have experience mentoring and managing large teams (20 to 30 people) on complex engineering programs, including hiring and nurturing Palantir Foundry talent.
- Training: should have experience creating training programs for Foundry and delivering them in a hands-on format, either offline or virtually.
- At least 3 years of hands-on experience building and managing Ontologies on Palantir Foundry.
- At least 3 years of experience with Foundry services:
  - Data engineering with Contour and Fusion
  - Dashboarding and report development using Quiver (or Reports)
  - Application development using Workshop
  - Exposure to Map and Vertex is a plus
  - Palantir AIP experience is a plus
- Hands-on experience in data engineering and building data pipelines (code/no-code) for ELT/ETL data migration, data refinement, and data quality checks on Palantir Foundry.
- Hands-on experience managing the data life cycle on at least one hyperscaler platform (AWS, GCP, Azure) using managed services or containerized deployments for data pipelines.
- Hands-on experience working with and building on the Ontology (especially demonstrable experience building semantic relationships).
- Proficiency in SQL, Python, and PySpark, with a demonstrable ability to write and optimize SQL and Spark jobs (see the sketch below). Some experience with Apache Kafka and Airflow is also a prerequisite.
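To make the PySpark requirement concrete, here is a minimal sketch of the kind of transformation pipeline this role describes; the dataset paths and column names (orders, order_amount, region) are hypothetical placeholders, not part of the posting.

```python
# Minimal PySpark transformation sketch; paths and columns are invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

# Read raw data; a real Foundry or cloud pipeline would read from a
# managed dataset or object store rather than a local path.
orders = spark.read.parquet("/data/raw/orders")

# Refine: filter invalid rows and derive a date column.
clean = (
    orders
    .filter(F.col("order_amount") > 0)
    .withColumn("order_date", F.to_date("order_ts"))
)

# Aggregate at scale: daily totals per region.
daily_by_region = (
    clean.groupBy("region", "order_date")
         .agg(F.sum("order_amount").alias("total_amount"),
              F.count("*").alias("order_count"))
)

daily_by_region.write.mode("overwrite").partitionBy("order_date") \
    .parquet("/data/refined/daily_orders_by_region")
```

Partitioning the output by date is one common way such pipelines keep downstream reads efficient as data volumes grow.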

Posted 2 days ago

Apply

0.0 - 4.0 years

0 Lacs

Karnataka

On-site

Role Overview: You will be part of the Trainee GOglobal Data & AI Program at Merck Group, a 2-year program offering diverse experiences in data & AI-driven projects. As a participant, you will work in both Bangalore, India, and Darmstadt, Germany, gaining exposure to various industries, technologies, and applications. Your role will involve contributing to data science and engineering projects, collaborating with different teams, and driving the development of innovative products within the business.

Key Responsibilities:
- Learn about the Group's Data & AI Strategy, Data Culture, and Data Governance while supporting products and projects within the Merck Data & AI Organization.
- Apply and strengthen your data & AI skills by contributing to projects in sectors such as Manufacturing, Supply Chain, R&D, Marketing, and Sales.
- Expand your global network by engaging with other data & AI enthusiasts across Merck.

Qualifications Required:
- Master's degree or PhD in Data Science, Computer Science, Mathematics, (Industrial) Engineering, Computational Chemistry, Physics, or a related field.
- Mandatory programming experience with languages such as Python, R, Matlab, Java, or C++.
- Prior experience with Palantir Foundry or AWS is beneficial.
- Ideally, two completed data & AI related internships and international experience.
- Strong interest in AI, machine learning, data engineering, and analysis.
- Excellent verbal and written communication skills in English.
- Strong analytical skills, creative thinking, and a pragmatic approach to work.
- Comfortable working with individuals from diverse backgrounds and cultures.
- Ability to communicate results to both technical and non-technical audiences.
- Strong interpersonal skills and a willingness to share knowledge.

Additional Details: Merck Group is committed to fostering a culture of inclusion and belonging where individuals from diverse backgrounds can excel and innovate together. The company values curiosity and offers opportunities for personal and professional growth through mentorship, networking, and continuous learning. Join a team dedicated to sparking discovery and championing human progress by applying now for the Trainee GOglobal Data & AI Program.

Posted 2 days ago

Apply

3.0 - 10.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Data Engineer specializing in Python and PySpark, you will be responsible for designing, building, and maintaining scalable ETL pipelines. Your key responsibilities will include:
- Designing, building, and maintaining scalable ETL pipelines using Python and PySpark.
- Ensuring data quality, reliability, and governance across systems and pipelines.
- Collaborating with cross-functional teams to understand business requirements and translate them into technical solutions.
- Performing performance tuning and troubleshooting of big data applications.
- Working with large datasets from multiple sources and preparing them for analytics and reporting.
- Following best practices in coding, testing, and deployment for enterprise-grade data applications.

To qualify for this role, you should have:
- 3-10 years of professional experience in data engineering, preferably in the utility, energy, or related industries.
- Strong proficiency in Python programming.
- Hands-on experience with PySpark for big data processing.
- Good understanding and working knowledge of Palantir Foundry.
- Experience with SQL and handling large datasets.
- Familiarity with data governance, data security, and compliance requirements in enterprise environments.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration skills.

Nice-to-have skills include experience with cloud data services (AWS, Azure, GCP), knowledge of utilities-domain data models and workflows, exposure to DevOps/CI-CD pipelines, and familiarity with visualization tools such as Tableau, Power BI, or Palantir dashboards.

In this role, you will have the opportunity to work on cutting-edge Palantir Foundry-based solutions in the utilities sector, be part of a dynamic, collaborative, and innovation-driven team, and grow your technical expertise across modern data platforms. A hedged sketch of the data-quality checks the role emphasizes follows below.
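Since the posting stresses data quality and validation, here is a sketch of the kind of pipeline-level checks it alludes to; the dataset and columns (meter_readings, meter_id, reading_kwh) are invented for illustration under a utilities-flavored assumption.

```python
# Sketch of simple data-quality checks; table and column names are
# hypothetical placeholders chosen for the utilities domain.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.read.parquet("/data/refined/meter_readings")

total = df.count()

# Completeness: key columns must not be null.
null_ids = df.filter(F.col("meter_id").isNull()).count()

# Uniqueness: the natural key should not repeat.
dupes = total - df.dropDuplicates(["meter_id", "reading_ts"]).count()

# Validity: readings must be non-negative.
invalid = df.filter(F.col("reading_kwh") < 0).count()

# Fail loudly instead of letting bad data flow downstream.
assert null_ids == 0, f"{null_ids} rows with null meter_id"
assert dupes == 0, f"{dupes} duplicate readings"
assert invalid == 0, f"{invalid} negative readings"
print(f"All checks passed on {total} rows")
```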

Posted 2 days ago

Apply

5.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description: YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we're a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth: bringing real positive change in an increasingly virtual world, and it drives us beyond generational gaps and the disruptions of the future. We are looking to hire Palantir professionals in the following areas.

We are seeking a highly motivated and technically skilled Senior Data Engineer to join our data team. The ideal candidate will have extensive experience designing, building, and optimizing scalable data pipelines and analytics solutions using Palantir Foundry, along with broader expertise in data architecture, governance, and processing. This role offers the opportunity to work on cutting-edge data platforms and deliver impactful solutions for enterprise data integration, transformation, and advanced analytics.

Key Responsibilities:
- Design, implement, and maintain scalable and reliable data pipelines using Palantir Foundry (see the transform sketch below).
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and translate them into technical solutions.
- Build reusable data assets, integrate data from multiple sources, and ensure data quality, integrity, and security.
- Develop and maintain transformation workflows, ontology models, and data governance frameworks within Foundry.
- Optimize data pipelines for performance, scalability, and cost efficiency.
- Troubleshoot, monitor, and resolve data issues and pipeline failures in real time.
- Implement best practices in data versioning, lineage, and observability.
- Assist in defining data architecture strategies and tooling roadmaps.
- Mentor junior engineers and contribute to team knowledge sharing and documentation.

Required Qualifications:
- 5+ years of experience in data engineering, analytics, or software engineering roles.
- Strong hands-on experience with Palantir Foundry, including ontology modeling, pipelines, data fusion, and transformation logic.
- Proficiency in Python, SQL, and data processing frameworks.
- Solid understanding of data integration, ETL workflows, and data modeling principles.
- Experience working with relational databases, data lakes, and cloud data warehouses (AWS, Azure, GCP).
- Strong problem-solving skills and the ability to debug complex data workflows.
- Familiarity with CI/CD pipelines and version control tools such as Git.
- Excellent communication skills and the ability to work collaboratively with cross-functional teams.

At YASH, you are empowered to create a career that will take you where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided by technology, for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded in four principles: flexible work arrangements, free spirit, and emotional positivity; agile self-determination, trust, transparency, and open collaboration; all support needed for the realization of business goals; and stable employment with a great atmosphere and an ethical corporate culture.
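As a hedged illustration of Foundry pipeline work, the sketch below follows the pattern of Palantir Foundry's Python transforms API as used in Code Repositories; the dataset paths and column names are hypothetical placeholders, and details may vary by Foundry version.

```python
# Sketch of a Foundry Code Repositories transform; dataset paths and
# columns are hypothetical placeholders.
from transforms.api import transform_df, Input, Output
from pyspark.sql import functions as F


@transform_df(
    Output("/Company/pipelines/datasets/claims_clean"),
    claims=Input("/Company/pipelines/datasets/claims_raw"),
)
def clean_claims(claims):
    # Standardize and filter the raw dataset; downstream ontology
    # objects or Workshop apps would build on this refined output.
    return (
        claims
        .filter(F.col("claim_amount").isNotNull())
        .withColumn("claim_year", F.year("claim_date"))
    )
```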

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

Role Overview: YASH Technologies is seeking AWS professionals with expertise in AWS services such as Glue, PySpark, SQL, Databricks, Python, and more. As an AWS Data Engineer, you will be responsible for designing, developing, testing, and supporting data pipelines and applications. This role requires a degree in computer science, engineering, or a related field, along with strong experience in data integration and pipeline development.

Key Responsibilities:
- Design, develop, test, and support data pipelines and applications using AWS services such as Glue, PySpark, SQL, Databricks, and Python (a minimal Glue job sketch follows below).
- Work with a mix of Apache Spark, Glue, Kafka, Kinesis, and Lambda in S3, Redshift, RDS, and MongoDB/DynamoDB ecosystems.
- Use SQL in the development of data warehouse projects/applications (Oracle and SQL Server).
- Develop in Python, especially PySpark, in an AWS Cloud environment.
- Work with SQL and NoSQL databases such as MySQL, Postgres, DynamoDB, and Elasticsearch.
- Manage workflows using tools such as Airflow.
- Use AWS cloud services such as RDS, AWS Lambda, AWS Glue, AWS Athena, and EMR.
- Familiarity with Snowflake and Palantir Foundry is a plus.

Qualifications Required:
- Bachelor's degree in computer science, engineering, or a related field.
- 3+ years of experience in data integration and pipeline development.
- Proficiency in Python, PySpark, SQL, and AWS.
- Strong experience with data integration using AWS Cloud technologies.
- Experience with Apache Spark, Glue, Kafka, Kinesis, Lambda, and the S3, Redshift, RDS, and MongoDB/DynamoDB ecosystems.
- Hands-on experience with SQL in data warehouse projects/applications.
- Familiarity with SQL and NoSQL databases.
- Knowledge of workflow management tools such as Airflow.
- Experience with AWS cloud services such as RDS, AWS Lambda, AWS Glue, AWS Athena, and EMR.

Note: The JD also highlights YASH Technologies' empowering work environment, which promotes career growth, continuous learning, and a positive, inclusive team culture grounded in flexibility, trust, transparency, and support for achieving business goals.
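For concreteness, here is a minimal sketch of an AWS Glue PySpark job of the kind this role describes; the catalog database, table, and S3 bucket names are hypothetical, and the boilerplate follows the standard Glue job pattern.

```python
# Minimal AWS Glue PySpark job sketch; catalog and S3 names are
# hypothetical placeholders.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session

# Read from the Glue Data Catalog as a DynamicFrame, then switch to
# plain Spark for the transformation.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)
df = dyf.toDF()

# Aggregate and write the curated result back to S3.
summary = df.groupBy("region").agg(F.sum("amount").alias("total"))
summary.write.mode("overwrite").parquet(
    "s3://my-bucket/curated/orders_summary/"
)
```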

Posted 3 days ago

Apply

6.0 - 10.0 years

0 Lacs

Maharashtra

On-site

As an experienced Data Engineer with 6-9 years of experience, your role will involve developing and maintaining data pipelines using Python, PySpark, and SQL for data transformations and workflows in Foundry. You will also build user interfaces, dashboards, and visualizations within Foundry's Workshop application for data analysis and reporting. Collaboration with stakeholders, including data engineers, business analysts, and business users, to gather requirements and design solutions will be key to project success. You will also play a crucial role in ensuring data quality and performance by implementing validation, testing, and monitoring processes, and your contributions to the Foundry ecosystem through code reviews, documentation, and shared best practices will strengthen overall platform adoption.

Key Responsibilities:
- Develop and maintain data pipelines using Python, PySpark, and SQL for data transformations and workflows in Foundry.
- Build user interfaces, dashboards, and visualizations within Foundry's Workshop application for data analysis and reporting.
- Collaborate with stakeholders to gather requirements, design solutions, and ensure project success.
- Ensure data quality and performance by implementing validation, testing, and monitoring processes.
- Contribute to the Foundry ecosystem through code reviews, documentation, and sharing best practices.

Qualifications Required:
- Bachelor's degree in computer science, data science, or a related field; an advanced degree is preferred.
- Palantir Foundry expertise: hands-on experience with Foundry components such as Workshop, Code Repositories, Pipeline Builder, and Ontology.
- Programming skills: proficiency in Python and PySpark for data manipulation and pipeline development.
- Database knowledge: strong SQL skills for data extraction, transformation, and query optimization.
- Data engineering background: experience with ETL/ELT workflows, data modeling, and validation techniques.
- Cloud platform familiarity: exposure to GCP, AWS, or Azure is preferred.
- Collaboration and communication: strong interpersonal and communication skills to work effectively with technical and non-technical stakeholders.

Posted 3 days ago

Apply

5.0 - 10.0 years

12 - 22 Lacs

Bengaluru

Work from Office

We're Hiring: Senior Data Engineer (Palantir Foundry)

We are looking for a highly motivated and technically skilled Senior Data Engineer to join our growing data team. If you are passionate about building scalable data pipelines and delivering impactful enterprise data solutions using Palantir Foundry, this role is for you!

Location: Bangalore
Experience: 5+ years
Skill Focus: Palantir Foundry + Data Engineering

Key Responsibilities:
- Design and maintain scalable and reliable data pipelines using Palantir Foundry.
- Collaborate with cross-functional teams to deliver enterprise-grade data solutions.
- Build reusable data assets, integrate multi-source data, and ensure governance.
- Develop workflows, ontology models, and transformation logic in Foundry.

Required Qualifications:
- 5+ years in data engineering, analytics, or software engineering.
- Strong hands-on expertise with Palantir Foundry (pipelines, ontology, data fusion).
- Proficiency in Python, SQL, ETL workflows, and data modeling.
- Experience with databases, data lakes, and cloud warehouses (AWS/Azure/GCP).
- Familiarity with CI/CD, Git, and modern DevOps practices.

Posted 5 days ago

Apply

6.0 - 11.0 years

20 - 35 Lacs

Bengaluru

Work from Office

Field: Information Technology, Data Management, Data Analytics, Business, Supply Chain, Operations
Education: Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or a related field
Number of Years: 5+ years in relevant roles; Palantir experience/knowledge is a must

Other:
- At least five years of relevant project experience successfully launching, planning, and executing data science projects, including statistical analysis, data engineering, and data visualization.
- Proven experience conducting statistical analysis and building models with advanced scripting languages.
- Experience leading projects that apply ML and data science to business functions.
- Specialization in text analytics, image recognition, graph analysis, or other specialized ML techniques, such as deep learning, is preferred.

Skills:
- Fluency in multiple programming languages and statistical analysis tools such as Python, PySpark, C++, JavaScript, R, SAS, Excel, and SQL.
- Knowledge of distributed data/computing tools such as MapReduce, Hadoop, Hive, or Kafka.
- Knowledge of statistical and data mining techniques such as generalized linear models (GLM)/regression, random forests, boosting, trees, text mining, hierarchical clustering, deep learning, convolutional neural networks (CNN), and recurrent neural networks (RNN) (see the sketch below).
- Strong understanding of AI, its potential roles in solving business problems, and the future trajectory of generative AI models.
- Willingness and ability to learn new technologies on the job.
- Ability to communicate complex projects, models, and results to a diverse audience with a wide range of understanding.
- Ability to work in diverse, cross-functional teams in a dynamic business environment.
- Superior presentation skills, including storytelling and other techniques to guide and inspire.
- Familiarity with big data, versioning, and cloud technologies such as Apache Spark, Azure Data Lake Storage, Git, Jupyter Notebooks, Azure Machine Learning, and Azure Databricks.
- Familiarity with data visualization tools (Power BI experience preferred).
- Knowledge of database systems and SQL.
- Strong communication and collaboration abilities.
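To ground two of the modeling techniques the skills list names, here is a small, self-contained sketch comparing a GLM-style logistic regression with a random forest on synthetic data; it is illustrative only and not part of the posting.

```python
# Sketch comparing a GLM (logistic regression) with a random forest;
# the data is synthetic, generated purely for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

for name, model in [
    ("logistic regression (GLM)", LogisticRegression(max_iter=1000)),
    ("random forest", RandomForestClassifier(n_estimators=200, random_state=42)),
]:
    model.fit(X_train, y_train)
    # Compare discrimination on the held-out set via ROC AUC.
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```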

Posted 6 days ago

Apply

3.0 - 10.0 years

1 - 25 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Roles and Responsibilities:
- Design, develop, test, and deploy large-scale data pipelines using Python and SQL.
- Collaborate with cross-functional teams to gather requirements and deliver high-quality solutions on time.
- Develop complex data models using Palantir's Foundry platform to support business intelligence initiatives.
- Troubleshoot issues related to data quality, performance, and scalability in a fast-paced environment.

Job Requirements:
- 3-10 years of experience in data engineering with expertise in the Python programming language.
- Strong understanding of SQL database management systems (e.g., PostgreSQL) for querying large datasets.
- Proficiency in developing ETL processes using Palantir's Foundry platform or similar tools.
- Experience working with big data technologies such as Hadoop or Spark.

Posted 6 days ago

Apply

4.0 - 6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the Role: We are looking for a versatile and driven Engineer with a strong foundation in Data Engineering and a growing focus on AI Engineering. This role is pivotal in designing and delivering robust data pipelines and platforms that power operational processes and advanced analytics, while also contributing to the development and deployment of AI-driven solutions. You will work closely with cross-functional teams to build scalable data infrastructure, ensure data quality, and support AI initiatives that drive business value.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL processes.
- Build and optimize data storage solutions such as data lakes and data warehouses.
- Integrate data from diverse sources including APIs, databases, and third-party providers.
- Ensure data quality, consistency, and governance across systems.
- Develop and maintain data models to support analytical and operational use cases.
- Collaborate with business analysts and stakeholders to translate requirements into technical specifications.
- Monitor and enhance the performance of data systems and resolve bottlenecks.
- Document data engineering processes and promote best practices.
- Support the development and deployment of Gen AI applications.
- Assist in designing AI solutions from PoC into production environments.
- Contribute to model validation, monitoring, and performance tuning.
- Stay current with emerging AI technologies and tools.

About the team: The role is assigned to the Data Engineering & Analytics Product Area (Area III), which is the data engineering and analytics backbone for business teams spanning HR, Legal & Compliance, Procurement, Communications, Branding, Marketing, and Corporate Real Estate.

About you: As a successful candidate for this role, you possess the traits below.

Must Have:
- Experience with data modeling and schema design.
- Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, Statistics, or a related field.
- 4-6 years of experience in data engineering, with exposure to AI/ML workflows and end-to-end pipelines from sourcing to reporting.
- Knowledge of data quality best practices, with experience in reconciliation testing.
- Proficiency in Python, PySpark, and cloud platforms such as Palantir Foundry or Azure Databricks/Azure Synapse/MS Fabric.
- Experience with data pipeline tools (e.g., Airflow, Azure Data Factory; see the sketch below) and data lake architectures.
- Knowledge of integration technologies such as REST/SOAP APIs and event-driven architectures.
- Strong problem-solving skills and a commitment to delivering high-quality solutions.
- Excellent communication skills and the ability to work with both technical and non-technical stakeholders.
- A desire to continuously upskill and stay updated with emerging technologies.

Good to have:
- Certification in Palantir Foundry or experience with Palantir AIP.
- In-depth knowledge of LLMs, AI agents, RAG architectures, and agentic flows.
- Experience building chatbots/applications using LLMs, AI agents, RAG architectures, and agentic flows.
- Familiarity with machine learning frameworks (e.g., Scikit-learn, TensorFlow, PyTorch).
- Proficiency in SQL and experience with relational databases (e.g., Oracle, Azure SQL).

About Swiss Re: Swiss Re is one of the world's leading providers of reinsurance, insurance, and other forms of insurance-based risk transfer, working to make the world more resilient. We anticipate and manage a wide variety of risks, from natural catastrophes and climate change to cybercrime. We cover both Property & Casualty and Life & Health. Combining experience with creative thinking and cutting-edge expertise, we create new opportunities and solutions for our clients. This is possible thanks to the collaboration of more than 14,000 employees across the world. Our success depends on our ability to build an inclusive culture encouraging fresh perspectives and innovative thinking. We embrace a workplace where everyone has equal opportunities to thrive and develop professionally regardless of their age, gender, race, ethnicity, gender identity and/or expression, sexual orientation, physical or mental ability, skillset, thought, or other characteristics. In our inclusive and flexible environment everyone can bring their authentic selves to work and their passion for sustainability. If you are an experienced professional returning to the workforce after a career break, we encourage you to apply for open positions that match your skills and experience.

Reference Code: 134837
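Since Airflow is named among the pipeline tools, here is a minimal sketch of an Airflow DAG wiring an extract step to a transform step; the DAG id, task names, and task bodies are hypothetical placeholders, and the `schedule` argument assumes Airflow 2.4+ (older versions use `schedule_interval`).

```python
# Minimal Airflow DAG sketch; task logic and names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull data from source APIs")


def transform():
    print("clean and reconcile the extracted data")


with DAG(
    dag_id="daily_reporting_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    # Transform runs only after extract succeeds.
    extract_task >> transform_task
```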

Posted 6 days ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Pune

Remote

Job Title: Palantir Application Engineer (Data Reporting & Visualization)
Location: Remote/Offshore
Shift: UK shift, 1 PM to 10 PM IST
Employment Type: Full-time (Contract)
Experience Level: 5+ years

Job Summary: We are seeking a skilled Palantir Application Engineer with hands-on expertise in building data-driven applications, reporting dashboards, and visualization solutions using Palantir platforms (Foundry/Gotham). The ideal candidate will work closely with business stakeholders, data engineers, and analysts to design and deliver intuitive, high-performance applications that transform complex datasets into actionable insights.

Key Responsibilities:
- Develop, configure, and maintain Palantir applications for data reporting, dashboards, and visualizations.
- Collaborate with business users to gather requirements and translate them into application features.
- Build interactive workflows, operational dashboards, and decision-support tools within Palantir.
- Work with data pipelines and data models prepared by data engineering teams to ensure applications are performant and reliable.
- Ensure data visualization best practices are applied for usability and accuracy.
- Provide user training, documentation, and support for Palantir applications.
- Troubleshoot and resolve issues related to application functionality, reporting, and data presentation.

Required Skills:
- Strong hands-on experience with Palantir Foundry (preferred) or Gotham.
- Good understanding of data modeling and business intelligence concepts.
- Strong SQL skills and the ability to work with large, complex datasets.
- Proficiency in Python, JavaScript, or similar scripting languages for customization.
- Expertise in creating reports, dashboards, and visualization workflows within Palantir.

Good to Have (Optional):
- Prior experience in the Oil & Gas, Energy, or Industrial domains.
- Knowledge of other BI/visualization tools (Tableau, Power BI, Qlik, etc.).
- Familiarity with cloud platforms (AWS/Azure/GCP) and data integration.
- Experience with workflow automation inside Palantir.

Posted 1 week ago

Apply

6.0 - 11.0 years

6 - 16 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Palantir Developer. Greetings from LTIMindtree! Please fill in the form below for the next steps: https://forms.office.com/r/iZHbVEkEcU. Please also share your updated resume with: NarayanaHemanth.Kumar@ltimindtree.com

Posted 1 week ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Bengaluru

Work from Office

Role Overview: We are seeking a highly experienced Palantir Developer/Architect with over 5 years of IT experience and deep expertise in architecting and delivering Palantir Foundry or Gotham solutions for large-scale enterprises. The ideal candidate will have a strong background in data engineering, solution design, cloud platforms, and enterprise integration, with the ability to engage senior stakeholders and lead end-to-end Palantir engagements from strategy through implementation.

Key Responsibilities:

Solution Architecture & Design:
- Architect and design scalable Palantir Foundry/Gotham solutions tailored to client business needs.
- Define data pipelines, ontologies, object models, and integration patterns.
- Ensure solutions align with enterprise architecture standards and best practices.

Delivery Leadership:
- Lead end-to-end implementation projects, including requirement analysis, design, development, deployment, and support.
- Provide technical leadership to data engineers, developers, and consultants working on Palantir platforms.
- Drive adoption of Palantir best practices and reusable frameworks.

Client & Stakeholder Management:
- Act as a trusted advisor to C-level and senior stakeholders for Palantir-driven digital transformation.
- Translate complex business requirements into data-driven solutions leveraging Palantir.
- Collaborate with cross-functional teams (cloud, security, analytics, ML/AI) on integrated solutions.

Innovation & Enablement:
- Evaluate and adopt new features in Palantir Foundry (Ontology, Code Repositories, Operational Object Models, Apollo).
- Contribute to accelerators, frameworks, and knowledge repositories to scale Palantir practices.
- Mentor and coach teams to upskill in Palantir development and architecture.

Required Skills & Qualifications:

Core Expertise:
- Hands-on architecture and delivery experience with Palantir Foundry or Gotham.
- Strong background in data integration, data engineering, and ontology design.
- Experience with object-oriented ontologies, Foundry transformations, operational workflows, and Palantir APIs.
- Proficiency in Python, SQL, Spark, Java/Scala, and distributed systems.

Enterprise Architecture & Cloud:
- Experience designing Palantir solutions on AWS, Azure, or GCP.
- Strong understanding of data governance, security, and compliance in regulated industries.

Leadership & Consulting:
- Proven track record of leading enterprise-scale Palantir engagements.
- Strong stakeholder management, communication, and client advisory skills.
- Ability to lead multi-disciplinary teams and drive successful delivery.

Preferred Qualifications:
- Palantir Foundry/Gotham certification(s).
- Experience in the Defense, Public Sector, BFSI, Healthcare, or Manufacturing domains.
- Exposure to AI/ML model integration with Palantir Foundry.
- Previous experience with big data and analytics platforms (Snowflake, Databricks, Hadoop, etc.).

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we're a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth: bringing real positive change in an increasingly virtual world, and it drives us beyond generational gaps and the disruptions of the future. We are looking to hire AWS professionals in the following areas.

AWS Data Engineer

Primary skillsets: AWS services including Glue, PySpark, SQL, Databricks, Python
Secondary skillsets: any ETL tool, GitHub, DevOps (CI/CD)
Experience: 3-4 years
Degree in computer science, engineering, or a similar field

Mandatory Skill Set: Python, PySpark, SQL, and AWS, with experience designing, developing, testing, and supporting data pipelines and applications.
- 3+ years of working experience in data integration and pipeline development.
- 3+ years of experience with AWS Cloud on data integration with a mix of Apache Spark, Glue, Kafka, Kinesis, and Lambda in S3, Redshift, RDS, and MongoDB/DynamoDB ecosystems; Databricks and Redshift experience is a major plus.
- 3+ years of experience using SQL in the development of data warehouse projects/applications (Oracle and SQL Server).
- Strong real-life experience in Python development, especially PySpark in an AWS Cloud environment.
- Strong SQL and NoSQL database skills (MySQL, Postgres, DynamoDB, Elasticsearch).
- Workflow management tools such as Airflow.
- AWS cloud services: RDS, AWS Lambda, AWS Glue, AWS Athena, EMR (equivalent tools in the GCP stack will also suffice).

Good to Have: Snowflake, Palantir Foundry

At YASH, you are empowered to create a career that will take you where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided by technology, for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded in four principles: flexible work arrangements, free spirit, and emotional positivity; agile self-determination, trust, transparency, and open collaboration; all support needed for the realization of business goals; and stable employment with a great atmosphere and an ethical corporate culture.

Posted 1 week ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Work your magic with us! Ready to explore, break barriers, and discover more? We know you've got big plans; so do we! Our colleagues across the globe love innovating with science and technology to enrich people's lives with our solutions in Healthcare, Life Science, and Electronics. Together, we dream big and are passionate about caring for our rich mix of people, customers, patients, and planet. That's why we are always looking for curious minds that see themselves imagining the unimaginable with us.

Trainee GOglobal Data & AI Program, Merck Group: 2-Year Program (All Genders)
Start of the Program: September 2025
Work Location: Bangalore, India, plus an international assignment at our headquarters in Darmstadt, Germany
Department: Merck Data & AI Organization

Growth, Development & Experience: A career at our company is an ongoing journey of discovery: our 60,000 people are shaping how the world lives, works, and plays through next-generation advancements in Healthcare, Life Science, and Electronics. For more than 350 years and across the world we have passionately pursued our curiosity to find novel and vibrant ways of enhancing the lives of others. To achieve this, we simplify and reduce complexity where we see it, we take accountability to deliver with impact, and we are curious and challenge the status quo; these are all key attributes of our High Impact Culture! The Global Graduate Program is the company's career accelerator for high-potential graduates who are early in their careers.

Your Role: During this 24-month program you will take an active part in our data & AI-driven, innovative, customer- and patient-oriented organization. Potential assignments vary from learning about the groupwide strategic approach and technological backbone in the Merck Data & AI Organization, to contributing to data science and engineering projects within one of our Sector Data Offices, to playing an active part in developing and managing products within the business. After a first assignment at your home base (Bangalore, India) you will have three more assignments in various parts of the company, one of them a 3- to 6-month assignment at our headquarters in Darmstadt, Germany. You will be exposed to various industries, technologies, and applications, but also gain valuable insights into different cultures and learn to collaborate effectively in a diverse and global organization. Depending on our business needs and your preference, you'll get the chance to expand your network and gain in-depth knowledge of day-to-day operations. As a Data & AI GOglobal Trainee you are the driver of your personal development and can decide on your target role: Do you prefer to become a data scientist and support our business with your machine learning and statistics expertise? Would you like to apply your expertise in machine learning and data analysis as an ML engineer, or do you prefer to record and implement requirements as a product owner? Whichever role you choose, you will support our teams in delivering data & AI projects and products that solve business problems.

During your assignments:
- You will learn about the Group's Data & AI Strategy, Data Culture, and Data Governance, and support our products and projects in the Merck Data & AI Organization.
- You will apply and strengthen your data & AI skills by contributing to products and projects within Sector Data Offices, ranging from Manufacturing, Supply Chain, and R&D to Marketing and Sales.
- You will expand your network globally by meeting many other data & AI enthusiasts across Merck.

During this program we will support your personal and professional growth and provide you with great networking opportunities to continuously improve your data & AI skills and business acumen. In addition to the assignment managers who guide you within your rotations, you will be supported by a mentor.

Who You Are:
- Master's degree or PhD in Data Science, Computer Science, Mathematics, (Industrial) Engineering, Computational Chemistry, Physics, or a similar field.
- Programming experience is mandatory, e.g. with Python, R, Matlab, Java, or C++.
- Prior experience with Palantir Foundry or AWS (e.g. EC2, SageMaker, S3) is beneficial.
- Ideally, two data & AI related internships and first international experience.
- Strong interest in AI, machine learning, data engineering, and analysis.
- A strong verbal and written communicator, with fluency in English (other languages are a plus).
- Strong analytical skills, the ability to think outside the box, and a pragmatic way of working.
- Ability to work with people from various backgrounds/cultures.
- Comfortable reporting results to both technical and non-technical audiences.
- Strong interpersonal skills and the desire to share knowledge and skills.

What we offer: We are curious minds that come from a broad range of backgrounds, perspectives, and life experiences. We believe that this variety drives excellence and innovation, strengthening our ability to lead in science and technology. We are committed to creating access and opportunities for all to develop and grow at your own pace. Join us in building a culture of inclusion and belonging that impacts millions and empowers everyone to work their magic and champion human progress! Apply now and become a part of a team that is dedicated to Sparking Discovery and Elevating Humanity!

Posted 1 week ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Work your magic with us! Ready to explore, break barriers, and discover more? We know you've got big plans; so do we! Our colleagues across the globe love innovating with science and technology to enrich people's lives with our solutions in Healthcare, Life Science, and Electronics. Together, we dream big and are passionate about caring for our rich mix of people, customers, patients, and planet. That's why we are always looking for curious minds that see themselves imagining the unimaginable with us.

Trainee GOglobal Data & AI Program: 2-Year Program
Work Location: Bangalore, India, plus an international 6-month assignment at our headquarters in Darmstadt, Germany

Your Role: During this 24-month program you will take an active part in our data & AI-driven, innovative, customer- and patient-oriented organization. Potential assignments vary from learning about the groupwide strategic approach and technological backbone in the Data & AI Organization, to contributing to data science and engineering projects within one of our Sector Data Offices, to playing an active part in developing and managing products within the business. After a first assignment at your home base (Bangalore, India) you will have three more assignments in various parts of the company, one of them a 3- to 6-month assignment at our headquarters in Darmstadt, Germany. You will be exposed to various industries, technologies, and applications, but also gain valuable insights into different cultures and learn to collaborate effectively in a diverse and global organization. Depending on our business needs and your preference, you'll get the chance to expand your network and gain in-depth knowledge of day-to-day operations. As a Data & AI GOglobal Trainee you are the driver of your personal development and can decide on your target role: Do you prefer to become a data scientist and support our business with your machine learning and statistics expertise? Would you like to apply your expertise in machine learning and data analysis as an ML engineer, or do you prefer to record and implement requirements as a product owner? Whichever role you choose, you will support our teams in delivering data & AI projects and products that solve business problems.

During your assignments:
- You will learn about the Group's Data & AI Strategy, Data Culture, and Data Governance, and support our products and projects in the Data & AI Organization.
- You will apply and strengthen your data & AI skills by contributing to products and projects within Sector Data Offices, ranging from Manufacturing, Supply Chain, and R&D to Marketing and Sales.
- You will expand your network globally by meeting many other data & AI enthusiasts across the organization.

During this program we will support your personal and professional growth and provide you with great networking opportunities to continuously improve your data & AI skills and business acumen. In addition to the assignment managers who guide you within your rotations, you will be supported by a mentor.

Who You Are:
- Master's degree or PhD in Data Science, Computer Science, Mathematics, (Industrial) Engineering, Computational Chemistry, Physics, or a similar field.
- Programming experience is mandatory, e.g. with Python, R, Matlab, Java, or C++.
- Prior experience with Palantir Foundry or AWS (e.g. EC2, SageMaker, S3) is beneficial.
- Ideally, two data & AI related internships and first international experience.
- Strong interest in AI, machine learning, data engineering, and analysis.
- A strong verbal and written communicator, with fluency in English (other languages are a plus).
- Strong analytical skills, the ability to think outside the box, and a pragmatic way of working.
- Ability to work with people from various backgrounds/cultures.
- Comfortable reporting results to both technical and non-technical audiences.
- Strong interpersonal skills and the desire to share knowledge and skills.

What we offer: We are curious minds that come from a broad range of backgrounds, perspectives, and life experiences. We believe that this variety drives excellence and innovation, strengthening our ability to lead in science and technology. We are committed to creating access and opportunities for all to develop and grow at your own pace. Join us in building a culture of inclusion and belonging that impacts millions and empowers everyone to work their magic and champion human progress! Apply now and become a part of a team that is dedicated to Sparking Discovery and Elevating Humanity!

Posted 1 week ago

Apply

1.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

GDS (Group Data Services) leads Swiss Re's ambition to be a truly data-driven risk knowledge company. GDS brings expertise and experience covering all aspects of data and analytics to enable Swiss Re in its vision to make the world more resilient. The Data Platform Engineering team deals with data & analytics platforms that enable the creation of innovative solutions to data-driven business needs. It also enables the Swiss Re Group to efficiently utilize the platforms and ensures their availability and proper functioning.

The Opportunity: Are you excited about the prospect of joining Swiss Re's mission to become a truly data-driven risk company? We are the GDS Machine Learning Operations team, and we are looking for a highly skilled and motivated Machine Learning Engineer to join us. In this role, you will be instrumental in building, optimizing, and maintaining our machine learning models. You will collaborate closely with our business groups to anticipate emerging client needs, respond to user inquiries, and resolve their issues. Your expertise will be crucial in empowering our users to adopt MLOps and LLMOps best practices, all while ensuring our systems remain both secure and cost-effective. As our new colleague, you will thrive in an agile environment, collaborating closely with peers, internal experts, and business clients to support, organize, and manage various activities within the team. Your contributions will be key to driving our data-driven initiatives forward and enhancing our risk management capabilities.

What you will work on during your first year at Swiss Re (Key Responsibilities):
- Model Development and Maintenance: Design, develop, test, deploy, and retrain machine learning models and algorithms to solve complex business problems in collaboration with data scientists and other engineers.
- Data Processing: Implement big data processing workflows and pipelines to handle large-scale datasets efficiently.
- MLOps & LLMOps: Promote and implement MLOps and LLMOps best practices to streamline model deployment, monitoring, and maintenance (a minimal tracking sketch follows below).
- Platform Management: Maintain and enhance our data science and machine learning platforms to ensure high performance and reliability.
- Collaboration: Work closely with business stakeholders to understand their needs, provide insights, and deliver tailored ML solutions.
- Security & Cost Management: Ensure that all systems and solutions are secure and cost-effective.

About You (Essentials): The following list represents our ideal candidate's profile. We know it is unlikely you meet 100% of our criteria; the more boxes you can check the better, but ultimately a willingness to keep learning is key.
- Educational Background: Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, or a related field preferred.
- Experience: 1+ years of proven experience in machine learning model development, deployment, and maintenance. Some experience with large language models is a plus.
- Technical Skills: Proficiency in Python, R, or similar programming languages. Experience with ML frameworks such as TensorFlow, PyTorch, or Scikit-learn.
- Big Data Technologies: Familiarity with big data processing tools, especially Spark or similar.
- MLOps & LLMOps: Knowledge of MLOps and LLMOps practices and tools such as Docker, Kubernetes, MLflow, etc.
- Palantir Foundry: Willingness to accustom yourself with Swiss Re's strategic data management platform, Palantir Foundry. Prior experience with the platform is a plus.
- Analytical Skills: Strong analytical and problem-solving skills with the ability to work with complex datasets.
- Communication: Excellent communication skills to interact effectively with both technical and non-technical stakeholders.
- Ideally, some prior experience with the (re)insurance or financial services industry.

Behavioural Competences: We work as one team operating from three different countries: India, Slovakia, and Switzerland. Our company's internal customer base is spread across all continents. Given that you will interact with users every day, a strong customer orientation, joy in communicating with users, and a commitment to quality and timeliness are important! Furthermore, the entire field of technologies we work with, such as cloud technologies, generative AI, and natural language processing, is evolving rapidly. You should have the curiosity to explore modern technologies and the persistence to make them accessible to a larger community even while they are still evolving. Last but not least, enjoying our work is just as important as achieving results.

Reference Code: 135224
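Because MLflow is named among the MLOps tools, here is a minimal experiment-tracking sketch on synthetic data; the run name, parameters, and model are invented for illustration and do not reflect Swiss Re's actual setup.

```python
# Minimal MLflow experiment-tracking sketch; data and names are
# synthetic placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=1000, n_features=10, random_state=0)

with mlflow.start_run(run_name="rf_baseline"):
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X, y)

    mse = mean_squared_error(y, model.predict(X))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("train_mse", mse)
    # Logging the model keeps it retrievable for later deployment
    # or retraining comparisons.
    mlflow.sklearn.log_model(model, "model")
```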

Posted 1 week ago

Apply

5.0 - 10.0 years

4 - 9 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Greetings from LTIMindtree! We are looking for a Palantir Developer. Please fill in the form below for the next steps: https://forms.office.com/r/iZHbVEkEcU

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Nashik, Maharashtra

On-site

As a Senior AI Business Analyst, you will lead discovery workshops and end-user interviews to uncover and document true customer needs. You will collaborate with cross-functional teams to gather, document, and analyze business requirements for AI projects, and create and maintain key artifacts such as Empathy Maps and User Personas. Through close business consultation, you will own the clarity and direction of specific analytical investigations, projects, and requests for substantial change while keeping the wider strategy in mind. You will balance an innovative mindset with technical considerations, working with the team to build feasibility prototypes that can be efficiently leveraged for wider scaling and impact.

In addition, you will provide business end-user support and training, create training materials, and manage change, helping users understand and apply key principles through a user-centric design approach. You will translate unstructured business problems into workable and valuable solutions, effectively communicating the results of complex models to both senior and non-technical stakeholders. Furthermore, you will respond to business requests for ad-hoc analysis and higher analytics, owning the design, development, and maintenance of ongoing analyses that drive intelligent, well-informed business decisions. You will assist in the testing and validation of AI solutions to ensure they meet business needs and specifications. You will also support Product Owners within the business in interpreting findings and creating a robust, living Solution Backlog presented as User Stories, and drive AI strategy internationally by providing users across the business with applications, examples, coaching, and KPIs that empower the organization to be agile and customer-centric.

To qualify for this position, you should have at least 5 years of relevant experience in business analysis with large corporations. A degree in a quantitative field such as Computer Science, Statistics, Mathematics, or a related field is required; higher education such as a master's or PhD is a plus. Demonstrable experience in methodologies including Design Thinking, Six Sigma, and Human-Centered Design is essential, as is proficiency in requirement gathering, documentation, and project management tools. A CBPP from the ABPMP or a similar qualification is highly advantageous, as is experience in Life Sciences, Healthcare & Pharmaceuticals. Proficiency in reading programming languages, specifically PySpark/Python and/or R, is required, with higher proficiency being advantageous. Advanced modeling skills, with experience building linear dimensional, regression-based, and logistic models, are necessary. Experience with Palantir Foundry & AIP is advantageous, and familiarity with agile methodologies is a plus.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

The ideal candidate for the Palantir Foundry Specialist position at UST in Pune should have at least 3 years of experience in procurement analytics, supply chain, or data engineering. You should possess expertise in Palantir Foundry tools such as Pipelines, Ontologies, Workshop, and Quiver, along with proficiency in SQL, Python, and Spark for data processing and transformation. Additionally, you should have experience working with ERP systems such as SAP, Oracle, and Coupa, as well as a deep understanding of procurement data structures. Knowledge of spend analysis, supplier risk management, and cost optimization strategies is essential, and you should be able to develop interactive dashboards and reports for business users. The role demands strong problem-solving abilities, analytical thinking, and effective stakeholder management skills. The position offers a hybrid work mode in Pune and an exciting opportunity to contribute to UST's innovative projects.

Posted 2 weeks ago

Apply

5.0 - 8.0 years

10 - 18 Lacs

Pune

Work from Office

Summary: We are seeking an experienced Data Engineer (5 years of relevant experience) with a strong technical background to design, develop, and support cutting-edge data-driven solutions. The ideal candidate will bring expertise in data engineering, cloud technologies, and enterprise-scale solution development, along with exceptional problem-solving, communication, and leadership skills.

Key Responsibilities:
- Design and implement data engineering pipelines using PySpark, Palantir Foundry, and Python.
- Leverage cloud technologies (AWS) to build scalable and secure solutions.
- Ensure compliance with architecture best practices and develop enterprise-scale solutions.
- Develop data-driven applications and ensure robust performance across all platforms.
- Collaborate with technology and business stakeholders to deliver innovative solutions aligned with organizational goals.
- Provide operational support for applications, systems, and infrastructure.

Qualifications:
- 5 years of experience in data engineering and ETL ecosystems with PySpark, Palantir Foundry, and Python.
- Hands-on experience with cloud platforms (AWS) and related technologies.
- Strong understanding of enterprise architecture and scalable solution development.
- Background in enterprise system solutions with a focus on versatility and reliability.
- Exceptional analytical, problem-solving, and communication skills, with a collaborative work ethic.

Preferred Skills:
- Experience in the utility domain is a strong plus.
- Experience with data-driven applications in high-scale environments.
- In-depth knowledge of industry trends and emerging technologies in the data engineering landscape.
- Ability to adapt to evolving challenges in dynamic work environments.

Posted 2 weeks ago

Apply

1.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description: Data Engineer I (F Band)

About the Role: As a Data Engineer, you will be responsible for implementing data pipelines and analytics solutions to support key decision-making processes in our Life & Health Reinsurance business. You will become part of a project that is leveraging cutting-edge technology, applying Big Data and Machine Learning to solve new and emerging problems for Swiss Re. You will be expected to gain a full understanding of the reinsurance data and business logic required to deliver analytics solutions.

Key responsibilities include:
- Work closely with Product Owners and Engineering Leads to understand requirements and evaluate the implementation effort.
- Develop and maintain scalable data transformation pipelines.
- Implement analytics models and visualizations to provide actionable data insights.
- Collaborate within a global development team to design and deliver solutions.

About the Team: Life & Health Data & Analytics Engineering is a key tech partner for our Life & Health Reinsurance division, supporting the transformation of the data landscape and the creation of innovative analytical products and capabilities. A large, globally distributed team working in an agile development landscape, we deliver solutions to make better use of our reinsurance data and enhance our ability to make data-driven decisions across the business value chain.

About You: Are you eager to disrupt the industry with us and make an impact? Do you wish to have your talent recognized and rewarded? Then join our growing team and become part of the next wave of data innovation.

Key qualifications include:
- Bachelor's degree or equivalent in Computer Science, Data Science, or a similar discipline.
- At least 1-3 years of experience working with large-scale software systems.
- Proficient in Python/PySpark.
- Proficient in SQL (Spark SQL preferred; see the sketch below).
- Palantir Foundry experience is a strong plus.
- Experience working with large data sets on enterprise data platforms and distributed computing (Spark/Hive/Hadoop preferred).
- Experience with JavaScript/HTML/CSS is a plus.
- Experience working in a cloud environment such as AWS or Azure is a plus.
- Strong analytical and problem-solving skills.
- Enthusiasm for working in a global and multicultural environment of internal and external professionals.
- Strong interpersonal and communication skills, demonstrating a clear and articulate standard of written and verbal communication in complex environments.

Reference Code: 135160
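To illustrate the Spark SQL proficiency the qualifications call for, here is a small sketch that registers a DataFrame as a temporary view and queries it; the treaties dataset and its columns are hypothetical placeholders with a reinsurance flavor, not actual Swiss Re data.

```python
# Sketch mixing the DataFrame API with Spark SQL; dataset and columns
# are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("treaty_analytics").getOrCreate()

treaties = spark.read.parquet("/data/reinsurance/treaties")
treaties.createOrReplaceTempView("treaties")

# Spark SQL query over the registered view.
result = spark.sql("""
    SELECT cedent_region,
           SUM(premium) AS total_premium,
           COUNT(*)     AS treaty_count
    FROM treaties
    WHERE status = 'active'
    GROUP BY cedent_region
    ORDER BY total_premium DESC
""")
result.show()
```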

Posted 2 weeks ago

Apply

10.0 - 12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the Team
The Data & Analytics team, part of the Finance Data & Analytics Engineering product area, is dedicated to delivering performant, scalable, and maintainable solutions that empower stakeholders to extract actionable insights from large and diverse financial datasets. Our solutions support financial reconciliation, steering, and auditing, leveraging platforms such as SAP S/4 HANA, Microsoft Azure, and Palantir Foundry. As an expert in this team, you will be responsible for the full lifecycle of solution development, from analysis to maintenance, ensuring fast, reliable, and insightful reporting capabilities.

About the Role
We are seeking a seasoned professional with a strong background in business-facing roles and technical expertise for the Swiss Re cloud transformation project. This role offers a unique opportunity to work in a complex and dynamic finance landscape. You will design and deliver IFRS reporting solutions for a large finance department, work that will challenge your problem-solving and solution-design skills. You will be using the cutting-edge technology platforms Azure Databricks, Palantir Foundry, and SAP S/4 HANA Embedded Analytics.

Key Responsibilities
• Architect and implement analytics and data modeling solutions across the Microsoft Azure Databricks, Palantir, and SAP landscape.
• Develop and enhance analytical applications and reporting tools.
• Collaborate with business stakeholders (business analysts, product owners, architects) to gather requirements, design functional solutions, and deliver end-to-end analytics products independently.
• Lead the development of prototypes and ensure timely delivery of production-ready solutions.
• Mentor and guide junior engineers, fostering a culture of technical excellence and continuous learning.
• Contribute to best practices, coding standards, and architectural guidelines.

About You
You are a passionate and experienced application engineer with deep expertise in ETL and analytics technologies. You thrive in collaborative environments, enjoy solving complex data challenges, and are committed to delivering high-quality, scalable solutions. You are also a strong communicator who can translate technical concepts into business value.

Requirements - Experience
The ideal candidate has expertise in cloud data warehousing / lakehouse solutions (Azure Databricks or Palantir) with exposure to and understanding of SAP tables and structures. The ability to work in an agile environment, with an understanding of its methods and tools (ADO/JIRA), is important.
• 10+ years of hands-on experience in designing and implementing ETL and reporting solutions for large data estates using Microsoft Azure/Databricks or Palantir.
• Proficiency in building scalable transformations using Python (PySpark) and SQL; understanding performance impact and resolving performance-related issues is a key requirement (a minimal sketch follows this listing).
• Exposure to SAP tables/structures and working knowledge of ABAP Objects, REST APIs, OO ABAP, BAPIs, and SQL.
• Experience with CI/CD pipelines and DevOps practices in data engineering.
• Proven track record across all phases of cloud application development: requirements gathering, architecture, implementation, maintenance, testing, and documentation.
• Strong analytical thinking and the ability to communicate complex processes clearly to non-technical stakeholders.
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• Experience in the financial services and insurance industry is preferred.
• Excellent verbal and written communication skills and experience working in agile environments.

Good to Have
• Understanding of S/4 HANA Embedded Analytics would be a value add.
• Exposure to financial reconciliation and auditing processes.
• Familiarity with data governance and compliance standards in finance.

Keywords: Reference Code: 135069
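To make the PySpark and performance expectations above concrete, here is a minimal sketch of the kind of scalable transformation the role describes. All table and column names (e.g. finance.ifrs_postings) are hypothetical placeholders invented for illustration, not part of any real Swiss Re data model; the point is the pattern of early column pruning, early filtering, and broadcasting a small dimension to avoid a shuffle-heavy join.

```python
# Illustrative only: a minimal PySpark transformation sketch for an
# IFRS-style reporting pipeline. Table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ifrs_reporting_sketch").getOrCreate()

# Prune columns and filter early so Spark scans as little data as possible.
postings = (
    spark.table("finance.ifrs_postings")          # hypothetical source table
    .select("posting_id", "account_id", "amount", "posting_date")
    .filter(F.col("posting_date") >= "2024-01-01")
)

# Small dimension table: broadcasting it avoids a shuffle-heavy join.
accounts = spark.table("finance.dim_accounts")    # hypothetical dimension

report = (
    postings.join(F.broadcast(accounts), "account_id")
    .groupBy("reporting_segment")
    .agg(F.sum("amount").alias("total_amount"))
)

report.write.mode("overwrite").saveAsTable("finance.ifrs_segment_totals")
```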

Posted 2 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

hyderabad, telangana, india

On-site

About the Role:
As a Data Engineer, you will be responsible for designing and delivering best-in-class data engineering capabilities supporting the company's ambition to become data-driven. You will have strong skills in architecting, designing, building, and testing data management solutions within a big data cloud infrastructure. You have a real passion for truly democratizing data for the business and our customers.

Key responsibilities include:
• Work closely with Product Owners and Architects to understand requirements, formulate solutions, and evaluate the implementation effort.
• Design, develop, and maintain scalable data transformation pipelines (a minimal sketch follows this listing).
• Design and implement analytics models and visualizations to provide actionable data insights.
• Evaluate new capabilities of the analytics platform, develop prototypes, and assist in drawing conclusions about their applicability to our solution landscape.
• Collaborate within a global development team to design and deliver solutions.

About the Team:
Life & Health Data & Analytics Engineering is a key tech partner for our Life & Health Reinsurance division, supporting the transformation of the data landscape and the creation of innovative analytical products and capabilities. A large, globally distributed team working in an agile development landscape, we deliver solutions to make better use of our reinsurance data and enhance our ability to make data-driven decisions across the business value chain.

About You:
Are you eager to disrupt the industry with us and make an impact? Do you wish to have your talent recognized and rewarded? Then join our growing team and become part of the next wave of data innovation.

Key qualifications include:
• Bachelor's degree level or equivalent in Computer Science, Data Science, or a similar discipline
• At least 3-5 years of experience working with large-scale software systems
• Proficient in Python/PySpark
• Proficient in SQL (Spark SQL preferred)
• Palantir Foundry experience is a strong plus
• Experience working with large data sets on enterprise data platforms and distributed computing (Spark/Hive/Hadoop preferred)
• Experience with JavaScript/HTML/CSS is a plus
• Experience working in a cloud environment such as AWS or Azure is a plus
• Knowledge of the insurance domain, the financial industry, or the finance function in other industries is a strong plus
• Ability and enthusiasm to work in a global and multicultural environment of internal and external professionals
• Strong interpersonal and communication skills, demonstrating a clear and articulate standard of written and verbal communication in complex environments

Keywords: Reference Code: 134827
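As a hedged illustration of the "scalable data transformation pipelines" and Spark SQL proficiency this listing asks for, here is a minimal sketch. The lh_policies and lh_claims tables and their columns are invented stand-ins, not an actual reinsurance schema; the design choice shown is expressing business logic declaratively in Spark SQL and partitioning the output so downstream reads stay selective.

```python
# Illustrative only: a small Spark SQL transformation of the kind this
# role describes. All table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lh_pipeline_sketch").getOrCreate()

# Express the join-and-aggregate business logic declaratively in SQL
# so it stays readable for analysts as well as engineers.
claims_per_policy = spark.sql("""
    SELECT  p.policy_id,
            p.product_line,
            COUNT(c.claim_id)   AS claim_count,
            SUM(c.claim_amount) AS total_claims
    FROM    lh_policies p              -- hypothetical policy table
    LEFT JOIN lh_claims c              -- hypothetical claims table
           ON c.policy_id = p.policy_id
    GROUP BY p.policy_id, p.product_line
""")

# Partitioning by product_line keeps downstream reads selective.
(claims_per_policy
    .write.mode("overwrite")
    .partitionBy("product_line")
    .saveAsTable("lh_claims_per_policy"))
```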

Posted 2 weeks ago

Apply

12.0 - 14.0 years

0 Lacs

bengaluru, karnataka, india

On-site

As a Senior Data Engineer, you will be responsible for designing and implementing complex data pipelines and analytics solutions to support key decision-making business processes in our Property & Casualty business domain. You will gain exposure to a project that leverages cutting-edge technology, applying Big Data and Machine Learning to solve new and emerging problems for Swiss Re Property & Casualty. You will be expected to take end-to-end ownership of deliverables, gaining a full understanding of the Property & Casualty data and the business logic required to deliver analytics solutions.

Key responsibilities include:
• Work closely with Product Owners and Architects to understand requirements, formulate solutions, and evaluate the implementation effort.
• Design, develop, and maintain scalable data transformation pipelines.
• Design data models and implement data architecture.
• Work with the Palantir Foundry platform for implementation (a minimal transform sketch follows this listing).
• Evaluate new capabilities of the analytics platform, develop prototypes, and assist in drawing conclusions about their applicability.
• Develop a single source of truth about our application landscape.
• Collaborate within a global development team to design and deliver solutions.
• Assist stakeholders with data-related functional and technical issues.
• Work with a data governance platform for data management and stewardship.

About the Team:
This position is part of the Property & Casualty Data Integration and Analytics project within the Reinsurance Data Office team under Data & Foundation. We are part of a global strategic initiative to make better use of our Property & Casualty data and to enhance our ability to make data-driven decisions across the Property & Casualty reinsurance value chain.

About You:
You enjoy the challenge of solving complex big data analytics problems using state-of-the-art technologies as part of a growing global team of data engineering professionals. You are a self-starter with strong problem-solving skills, capable of owning and implementing solutions from start to finish.

Key qualifications include:
• Bachelor's degree level or equivalent in Computer Science, Data Science, or a similar discipline
• At least 12 years of experience working with large-scale software systems
• At least 6 years of experience in PySpark and proficiency in designing large-scale data engineering solutions
• Minimum of 2 years of experience with Palantir Foundry, including familiarity with tools such as code repositories and Workshop
• Proficient in SQL (Spark SQL preferred)
• Experience working with large data sets on enterprise data platforms and distributed computing (Spark/Hive/Hadoop preferred)
• Experience with TypeScript/JavaScript/HTML/CSS is a plus
• Knowledge of data management fundamentals and data warehousing principles
• Demonstrated strength in data modelling, ETL, and storage/data lake development
• Experience with Scrum/Agile development methodologies
• Knowledge of the insurance domain, the financial industry, or the finance function in other industries is a strong plus
• Experienced in working with a diverse multi-location team of internal and external professionals
• Strong analytical and problem-solving skills
• Self-starter with a positive attitude and a willingness to learn
• Ability to manage own workload in a self-directed way
• Ability and enthusiasm to work in a global and multicultural environment
• Strong interpersonal and communication skills, demonstrating a clear and articulate standard of written and verbal communication in complex environments

Keywords: Reference Code: 133973
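Since this role names Palantir Foundry code repositories explicitly, here is a minimal, hedged sketch of what a Foundry PySpark transform typically looks like, following the publicly documented transforms.api decorator pattern. The dataset paths and column names are hypothetical placeholders, not references to any real Swiss Re datasets.

```python
# Illustrative only: a minimal Palantir Foundry transform following the
# publicly documented transforms.api pattern. Dataset paths and columns
# are hypothetical placeholders.
from pyspark.sql import functions as F
from transforms.api import transform_df, Input, Output


@transform_df(
    Output("/PnC/analytics/claims_summary"),   # hypothetical output path
    claims=Input("/PnC/raw/claims"),           # hypothetical input path
)
def claims_summary(claims):
    """Aggregate raw claims into a per-line-of-business summary."""
    return (
        claims
        .filter(F.col("status") == "CLOSED")
        .groupBy("line_of_business")
        .agg(
            F.count("claim_id").alias("claim_count"),
            F.sum("paid_amount").alias("total_paid"),
        )
    )
```

In Foundry code repositories, a transform like this is built and scheduled by the platform itself, so the function body only declares the DataFrame logic; inputs and outputs are wired through the decorator rather than through an explicit Spark session.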

Posted 2 weeks ago

Apply