Jobs
Interviews

34 Palantir Foundry Jobs

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Associate

Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities:
- 3+ years of experience implementing analytical solutions using Palantir Foundry, preferably in PySpark and on hyperscaler platforms (cloud services such as AWS, GCP, and Azure), with a focus on building data transformation pipelines at scale.
- Team management: must have experience mentoring and managing large teams (20 to 30 people) on complex engineering programs, and experience hiring and nurturing Palantir Foundry talent.
- Training: should have experience creating training programs in Foundry and delivering them in a hands-on format, either offline or virtually.
- At least 3 years of hands-on experience building and managing Ontologies on Palantir Foundry.
- At least 3 years of experience with Foundry services: data engineering with Contour and Fusion; dashboarding and report development using Quiver (or Reports); application development using Workshop. Exposure to Map and Vertex is a plus; Palantir AIP experience is a plus.
- Hands-on experience in data engineering and building data pipelines (code/no-code) for ELT/ETL data migration, data refinement, and data quality checks on Palantir Foundry.
- Hands-on experience managing the data life cycle on at least one hyperscaler platform (AWS, GCP, Azure) using managed services or containerized deployments for data pipelines is necessary.
- Hands-on experience working and building on the Ontology (especially demonstrable experience building semantic relationships).
- Proficiency in SQL, Python, and PySpark, with demonstrable ability to write and optimize SQL and Spark jobs. Some experience with Apache Kafka and Airflow is a prerequisite as well.
- Hands-on DevOps experience on hyperscaler platforms and Palantir Foundry is necessary; experience in MLOps is a plus.
- Experience developing and managing scalable architecture and working with large data sets.
- Open-source contributions (or own repositories highlighting work) on GitHub or Kaggle are a plus.
- Experience with graph data and graph analysis libraries (such as Spark GraphX or Python NetworkX) is a plus.
- A Palantir Foundry certification (Solution Architect, Data Engineer) is a plus; the certificate should be valid at the time of interview.
- Experience developing GenAI applications is a plus.

Mandatory Skill Sets: at least 3 years of hands-on experience building and managing Ontologies on Palantir Foundry; at least 3 years of experience with Foundry services.
Preferred Skill Sets: Palantir Foundry.
Years of Experience Required: 2 to 4 years (2 years relevant).
Education Qualification: Bachelor's degree in computer science, data science, or another engineering discipline; a Master's degree is a plus.
Education (if blank, degree and/or field of study not specified) - Degrees/Fields of Study required: Bachelor's Degree, Master's Degree
Certifications (if blank, certifications not specified)
Required Skills: Palantir (Software)
Optional Skills: Accepting Feedback, Active Listening, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis, Intellectual Curiosity, Java (Programming Language), Market Development + 11 more
Desired Languages: Not specified
Travel Requirements: Not Specified
Available for Work Visa Sponsorship: No
Government Clearance Required: No
Job Posting End Date
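The listing above asks for pipelines that perform "data refinement and data quality checks." As a dependency-free sketch of what a row-level quality gate looks like (in production this would typically run as PySpark DataFrame filters; the schema and rules here are hypothetical):

```python
# Toy stand-in for a data-quality stage in an ELT pipeline.
# Plain Python is used so the shape of the logic is easy to see.

REQUIRED_FIELDS = ("id", "amount")  # hypothetical schema

def quality_check(rows):
    """Split rows into (clean, rejected) by simple rules:
    required fields present and non-null, amount non-negative,
    duplicate ids dropped (first occurrence wins)."""
    seen_ids = set()
    clean, rejected = [], []
    for row in rows:
        if any(row.get(f) is None for f in REQUIRED_FIELDS):
            rejected.append(row)   # missing or null required field
        elif row["amount"] < 0:
            rejected.append(row)   # failed range rule
        elif row["id"] in seen_ids:
            rejected.append(row)   # duplicate key
        else:
            seen_ids.add(row["id"])
            clean.append(row)
    return clean, rejected

records = [
    {"id": 1, "amount": 10.0},
    {"id": 1, "amount": 12.0},   # duplicate id
    {"id": 2, "amount": -5.0},   # negative amount
    {"id": 3, "amount": None},   # null required field
]
clean, rejected = quality_check(records)
print(len(clean), len(rejected))  # → 1 3
```

The same split-and-route pattern scales up directly: in Spark each rule becomes a `filter` expression, with rejected rows written to a quarantine dataset for inspection.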

Posted 1 day ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Requirements:
- 3 years of experience in software development
- B.Sc. or higher degree in Computer Science, Engineering, Mathematics, Physical Sciences, or related fields
- Strong communication skills
- User-centric thinking
- Good understanding of UX
- Good intuition for design

Technical Skills:
- Proficiency in TypeScript/JavaScript (e.g., on Node.js)
- Proficiency in HTML/CSS and design systems like Bootstrap
- Proficiency in building responsive desktop and mobile applications
- Expertise developing and deploying applications on popular cloud platforms (AWS or Azure) is a plus
- Experience in React/Next.js is a plus
- Experience with analytics software (e.g., Power BI or Tableau) is a huge plus
- Experience with Spark/Python is a plus

Job Description: Position Title: Frontend Engineer/Application Engineer

As a Frontend Engineer in the Analytics Center of Excellence (ACE) at Client, you will work in close collaboration with a global team of product owners, solution architects, and data engineers on state-of-the-art data applications on our UPTIMIZE Platform (Palantir Foundry, AWS, Tableau, Snowflake). Our goal is to produce D&A products that allow for data-driven decisions and excite our users. With our applications, we strive to activate our data and analytics to allow for closed-loop operations. At Client, we value curiosity and believe it is essential for our frontend developers to generate creative ideas for our groundbreaking applications. You should have a good understanding of UX so you can independently come up with designs that support specific operations or functionality and bring data visualizations to life. In our day-to-day operation, we use Palantir Foundry and implement and maintain applications in Workshop and Slate (mainly a low-code environment). We also want to build applications on AWS, where your skills will be helpful. You should be eager to try new things on our platform and test the boundaries of what's possible.
Since we specialize in data applications, experience in data engineering or in closely collaborating with data engineers is a plus.

Responsibilities:
- Collaborate with a global team of product owners, solution architects, and data engineers to design, develop, test, and maintain state-of-the-art data applications in Palantir Foundry
- Generate creative and innovative ideas for intuitive, visually appealing interfaces and independently bring them to fruition
- Develop and maintain user-friendly applications using TypeScript/JavaScript
- Ensure code quality with testing and code review processes, using version control systems like Git
- Continuously evaluate and improve the user experience of our applications

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

Do you have in-depth experience in Nat Cat models and tools? Do you enjoy being part of a distributed team of Cat Model specialists with diverse backgrounds, educations, and skills? Are you passionate about researching, debugging issues, and developing tools from scratch?

We are seeking a curious individual to join our NatCat infrastructure development team. As a Cat Model Specialist, you will collaborate with the Cat Perils Cat & Geo Modelling team to maintain models, tools, and applications used in the NatCat costing process. Your responsibilities will include supporting model developers in validating their models, building concepts and tools for exposure reporting, and assisting in model maintenance and validation. You will be part of the Cat & Geo Modelling team based in Zurich and Bangalore, which specializes in natural science, engineering, and statistics. The team is responsible for Swiss Re's global natural catastrophe risk assessment and focuses on advancing innovative probabilistic and proprietary modelling technology for earthquake, windstorm, and flood hazards.
Main Tasks/Activities/Responsibilities:
- Conceptualize and build NatCat applications using sophisticated analytical technologies
- Collaborate with model developers to implement and test models in the internal framework
- Develop and implement concepts to enhance the internal modelling framework
- Coordinate with various teams for successful model and tool releases
- Provide user support on model- and tool-related issues
- Install and maintain the Oasis setup and contribute to the development of new functionality
- Coordinate platform setup and maintenance with third-party vendors

About You:
- Graduate or post-graduate degree in mathematics, engineering, computer science, or equivalent quantitative training
- Minimum 5 years of experience in the Cat Modelling domain
- Reliable, committed, hands-on, with experience in Nat Cat modelling
- Previous experience with catastrophe models or exposure reporting tools is a plus
- Strong programming skills in MATLAB, MS SQL, Python, PySpark, R
- Experience in consuming WCF/RESTful services
- Knowledge of Business Intelligence, reporting, and data analysis solutions
- Experience in an agile development environment is beneficial
- Familiarity with Azure services like Storage, Data Factory, Synapse, and Databricks
- Good interpersonal skills, self-driven, and able to work in a global team
- Strong analytical and problem-solving skills

About Swiss Re: Swiss Re is a leading provider of reinsurance, insurance, and insurance-based risk transfer solutions. With over 14,000 employees worldwide, we anticipate and manage various risks to make the world more resilient. We cover a wide range of risks from natural catastrophes to cybercrime, offering solutions in both Property & Casualty and Life & Health sectors. If you are an experienced professional returning to the workforce after a career break, we welcome you to apply for positions that match your skills and experience.

Posted 3 days ago

Apply

9.0 - 13.0 years

0 Lacs

Pune, Maharashtra

On-site

As an experienced professional in software engineering, data architecture, or AI/ML with over 9 years of relevant experience, you will be responsible for architecting enterprise-grade solutions using Palantir Foundry and AIP. Your role will involve leading AI application development, including agentic AI for business process automation. You will own the end-to-end solution lifecycle, encompassing design, development, deployment, and production support. It will be crucial for you to define DevOps and platform engineering standards for Foundry deployments, guiding data governance, security, and CI/CD automation across teams.

Collaboration with global teams to build scalable frameworks and reusable templates will be a key aspect of your responsibilities. You will also lead environment governance, versioning strategy, and platform upgrade planning. Acting as a technical advisor to stakeholders, you will translate complex requirements into actionable solutions and drive innovation by integrating emerging AI/ML capabilities into Foundry workflows.

Your proficiency in Foundry tools such as Ontology Manager, Pipeline Builder, Code Workbook, and Contour, along with advanced knowledge of Palantir AIP, GenAI, and LLM integrations, will be essential for success in this role. Experience in managing production environments, observability tools, GitOps, CI/CD automation, and branching strategies, as well as proficiency in programming languages such as Python, Java, TypeScript, or C++, will be required. A strong foundation in SQL, Spark, PySpark, and data modeling, plus familiarity with cloud platforms like AWS, Azure, and GCP and with DevOps practices, will be beneficial. Excellent leadership, communication skills, and stakeholder engagement are essential qualities for this position.
Preferred qualifications include Palantir certifications (Foundry Basics, Developer Track); experience mentoring teams and leading agile delivery; knowledge of DevOps, data lineage, and automated deployments; and a background in platform engineering, enterprise architecture, or solution consulting.

Posted 4 days ago

Apply

6.0 - 10.0 years

20 - 30 Lacs

Hyderabad

Hybrid

Key Skills: .NET Core, C#, Azure Kubernetes Service (AKS), Databricks, Delta Lake, Spark, Data Lake, Palantir Foundry, GenAI, CI/CD, Agile, ITSM, SaaS, backend development, and security remediation.

Roles & Responsibilities:
- Design, build, and enhance P&C solutions technology architecture and engineering.
- Recommend and implement alternative solutions to business challenges to streamline processes and create competitive advantage.
- Prioritize efforts based on business benefits and drive execution to ensure tangible outcomes.
- Collaborate with the Engineering Director, P&C Solutions Engineering Leads, Product Owners, Technology Platform Leads, and Operations teams.
- Lead engineering for specific products and manage internal and external engineering team members.
- Coach and mentor junior team members across the organization.
- Promote knowledge sharing across P&C solutions engineering teams and align best practices within Reinsurance.
- Communicate ideas and plans to leadership teams and boards as required.

Experience Requirement:
- 6-10 years of experience in backend software development using .NET Core/C#.
- Hands-on experience with Azure Kubernetes Service (AKS) or similar container orchestration platforms.
- Strong experience with Databricks, Data Lake, Delta Lake, and Spark-based workloads.
- Experience with Palantir Foundry or equivalent analytics/data platforms.
- Background in implementing CI/CD pipelines, unit testing, and backend development best practices.
- Familiarity with GenAI capabilities and experience in technical feasibility studies.
- Exposure to agile methodologies, cross-cultural collaboration, and ITSM Level 3 SaaS applications.
- Understanding of security vulnerabilities and experience in remediation within defined SLAs.
- Knowledge of the insurance/reinsurance domain is a plus.

Education: Any Post Graduation, Any Graduation.

Posted 1 week ago

Apply

4.0 - 8.0 years

4 - 8 Lacs

Gurgaon, Haryana, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients to more effectively run their business and understand what business questions can be answered and how to unlock the answers.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: Palantir Foundry Client-frontline Solution Architect

Responsibilities:
- 3+ years of experience implementing analytical solutions using Palantir Foundry, preferably in PySpark and on hyperscaler platforms (cloud services such as AWS, GCP, and Azure), with a focus on building data transformation pipelines at scale.
- Team management: must have experience mentoring and managing large teams (20 to 30 people) on complex engineering programs, and experience hiring and nurturing Palantir Foundry talent.
- Training: should have experience creating training programs in Foundry and delivering them in a hands-on format, either offline or virtually.
- At least 3 years of hands-on experience building and managing Ontologies on Palantir Foundry.
- At least 3 years of experience with Foundry services: data engineering with Contour and Fusion; dashboarding and report development using Quiver (or Reports); application development using Workshop. Exposure to Map and Vertex is a plus; Palantir AIP experience is a plus.
- Hands-on experience in data engineering and building data pipelines (code/no-code) for ELT/ETL data migration, data refinement, and data quality checks on Palantir Foundry.
- Hands-on experience managing the data life cycle on at least one hyperscaler platform (AWS, GCP, Azure) using managed services or containerized deployments for data pipelines is necessary.
- Hands-on experience working and building on the Ontology (especially demonstrable experience building semantic relationships).
- Proficiency in SQL, Python, and PySpark, with demonstrable ability to write and optimize SQL and Spark jobs. Some experience with Apache Kafka and Airflow is a prerequisite as well.
- Hands-on DevOps experience on hyperscaler platforms and Palantir Foundry is necessary; experience in MLOps is a plus.
- Experience developing and managing scalable architecture and working with large data sets.
- Open-source contributions (or own repositories highlighting work) on GitHub or Kaggle are a plus.
- Experience with graph data and graph analysis libraries (such as Spark GraphX or Python NetworkX) is a plus.
- A Palantir Foundry certification (Solution Architect, Data Engineer) is a plus; the certificate should be valid at the time of interview.
- Experience developing GenAI applications is a plus.

Minimum Education: Bachelor's degree in computer science, data science, or another engineering discipline; a Master's degree is a plus.
Mandatory Skill Sets: Palantir
Preferred Skill Sets: Palantir
Years of Experience Required: 4+
Education Qualification: BTech/MBA/MCA
Education (if blank, degree and/or field of study not specified) - Degrees/Fields of Study required: Master of Business Administration, Bachelor of Engineering
Certifications (if blank, certifications not specified)
Required Skills: Palantir (Software)
Optional Skills: Accepting Feedback, Active Listening, Analytical Reasoning, Analytical Thinking, Application Software, Business Data Analytics, Business Management, Business Technology, Business Transformation, Communication, Creativity, Documentation Development, Embracing Change, Emotional Regulation, Empathy, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Performance Assessment, Performance Management Software + 16 more
Desired Languages: Not specified
Travel Requirements: Not Specified
Available for Work Visa Sponsorship: No
Government Clearance Required: No

Posted 1 week ago

Apply

4.0 - 15.0 years

0 Lacs

Hyderabad, Telangana

On-site

You are a highly skilled professional with over 15 years of experience and strong expertise in Python programming, PySpark queries, AWS, GIS, and Palantir Foundry. Your primary responsibilities will include developing and enhancing data processing, orchestration, and monitoring using popular open-source software, AWS, and GitLab automation. You will collaborate closely with product and technology teams to design and validate the capabilities of the data platform. Additionally, you will be responsible for identifying, designing, and implementing process improvements, automating manual processes, optimizing usability, and redesigning for greater scalability. Your role will also involve providing technical support and usage guidance to the users of the platform's services. You will drive the creation and refinement of metrics, monitoring, and alerting mechanisms to ensure visibility into production services.

To be successful in this position, you should have experience building and optimizing data pipelines in a distributed environment, experience working with cross-functional teams, and proficiency in a Linux environment. You must have at least 4 years of advanced working knowledge of SQL, Python, and PySpark. Knowledge of Palantir and experience with tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline is highly desirable. Additionally, experience with platform monitoring and alerting tools will be beneficial for this role.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

You are required to be very strong in Python programming, PySpark queries, AWS, GIS, and Palantir Foundry. Your role involves leading data engineering activities on moderate to complex data and analytics-centric problems that have a broad impact and require in-depth analysis to achieve desired results. You will be responsible for assembling, enhancing, maintaining, and optimizing current data to enable cost savings and meet project or enterprise maturity objectives. Your skill set should include advanced working knowledge of SQL, Python, and PySpark. Experience with tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline is essential, as is familiarity with platform monitoring and alerting tools.

You will collaborate closely with Subject Matter Experts (SMEs) to design and develop Foundry front-end applications with the ontology (data model) and the data pipelines that support these applications. Implementing data transformations to derive new datasets, or creating Foundry Ontology Objects necessary for business applications, will be part of your responsibilities. Your tasks will also involve implementing operational applications using Foundry tools such as Workshop, Map, and/or Slate. Active participation in agile/scrum ceremonies such as stand-ups, planning, and retrospectives is expected. It is essential to create and maintain documentation describing the data catalog and data objects. As usage of the applications grows and requirements change, you will be required to maintain them effectively.
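The listing above describes "implementing data transformations to derive new datasets." In Foundry Code Repositories that is typically written as a PySpark transform; as a dependency-free sketch of the same shape, a pure function from input datasets to a derived dataset (all dataset and column names here are hypothetical):

```python
# Toy derivation: join an "orders" dataset to a "customers" dataset
# and aggregate revenue per region -- the kind of derived dataset a
# pipeline transform would produce for a downstream application.

from collections import defaultdict

def derive_revenue_by_region(orders, customers):
    """Pure function: input datasets in, derived dataset out,
    with no side effects -- mirroring how a transform maps
    inputs to a single output dataset."""
    region_of = {c["customer_id"]: c["region"] for c in customers}
    totals = defaultdict(float)
    for order in orders:
        region = region_of.get(order["customer_id"], "UNKNOWN")
        totals[region] += order["amount"]
    # Emit rows sorted by region for deterministic output.
    return [{"region": r, "revenue": v} for r, v in sorted(totals.items())]

customers = [
    {"customer_id": "c1", "region": "APAC"},
    {"customer_id": "c2", "region": "EMEA"},
]
orders = [
    {"customer_id": "c1", "amount": 100.0},
    {"customer_id": "c1", "amount": 50.0},
    {"customer_id": "c2", "amount": 25.0},
]
print(derive_revenue_by_region(orders, customers))
# → [{'region': 'APAC', 'revenue': 150.0}, {'region': 'EMEA', 'revenue': 25.0}]
```

Keeping the derivation a pure function of its inputs is what makes such transforms easy to test, re-run, and track for lineage, whatever the platform.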

Posted 2 weeks ago

Apply

4.0 - 15.0 years

0 Lacs

Hyderabad, Telangana

On-site

You have a great opportunity to join our team as a Senior Data Engineer with a strong focus on Python programming, PySpark queries, AWS, GIS, and Palantir Foundry. With over 15 years of experience in the field, you will play a crucial role in developing and enhancing data processing, orchestration, monitoring, and more by leveraging popular open-source software, AWS, and GitLab automation. Your collaboration with product and technology teams will be essential in designing and validating the capabilities of the data platform. You will be responsible for identifying, designing, and implementing process improvements, automating manual processes, optimizing for usability, and redesigning for greater scalability. Providing technical support and usage guidance to the users of our platform's services will be a key aspect of your role. Additionally, you will drive the creation and refinement of metrics, monitoring, and alerting mechanisms to give us the visibility we need into our production services.

To excel in this role, you should have experience building and optimizing data pipelines in a distributed environment. Your ability to support and work with cross-functional teams will be vital. Proficiency in working in a Linux environment is a must, along with 4+ years of advanced working knowledge of SQL, Python, and PySpark. Knowledge of PySpark queries is a mandatory requirement. Familiarity with Palantir and experience using tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline will be beneficial. Experience with platform monitoring and alerting tools will also be considered a plus.

Posted 2 weeks ago

Apply

4.0 - 15.0 years

0 Lacs

Hyderabad, Telangana

On-site

You should be strong in Python programming and PySpark queries, with experience in AWS, GIS, and Palantir Foundry. With over 15 years of experience, you will be based in Hyderabad and work in the office 5 days a week. Your responsibilities will include developing and enhancing data processing, orchestration, and monitoring using open-source software, AWS, and GitLab automation. You will collaborate with product and technology teams to design and validate data platform capabilities. Additionally, you will identify, design, and implement process improvements, provide technical support to platform users, and drive the creation of metrics and monitoring mechanisms for production services.

To qualify for this role, you should have experience building and optimizing data pipelines in a distributed environment, experience working with cross-functional teams, and proficiency in a Linux environment. You should also have at least 4 years of advanced knowledge of SQL, Python, and PySpark, along with knowledge of Palantir. Experience with tools like Git/Bitbucket, Jenkins/CodeBuild, CodePipeline, and platform monitoring tools is also required.

Posted 2 weeks ago

Apply

8.0 - 13.0 years

13 - 18 Lacs

Hyderabad

Work from Office

- 4+ years of advanced working knowledge of SQL, Python, and PySpark
- PySpark queries: must
- Experience in Palantir
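Several of these listings pair "advanced working knowledge of SQL" with Python. As a minimal, self-contained illustration of that pairing using Python's built-in sqlite3 module (the table and columns are hypothetical; production work would target Spark SQL or a warehouse instead):

```python
import sqlite3

# In-memory database so the example is fully self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (city TEXT, years_min REAL)")
conn.executemany(
    "INSERT INTO jobs VALUES (?, ?)",
    [("Hyderabad", 4.0), ("Mumbai", 3.0), ("Hyderabad", 8.0)],
)

# Aggregate query: posting count and lowest experience bar per city.
rows = conn.execute(
    """
    SELECT city, COUNT(*) AS postings, MIN(years_min) AS min_years
    FROM jobs
    GROUP BY city
    ORDER BY postings DESC, city
    """
).fetchall()
print(rows)  # → [('Hyderabad', 2, 4.0), ('Mumbai', 1, 3.0)]
conn.close()
```

The same GROUP BY/aggregate pattern carries over almost verbatim to Spark SQL, which is why interviewers often probe both sides of it.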

Posted 2 weeks ago

Apply

2.0 - 15.0 years

0 Lacs

Hyderabad, Telangana

On-site

You are a highly skilled and hands-on Palantir Tech Lead with over 15 years of total IT experience, including at least 1.5-2 years of recent experience in Palantir Foundry. You will be joining our team in Hyderabad for an onsite leadership role that requires technical depth, project ownership, and cross-functional collaboration. The project is scheduled for a 6+ month contract with the possibility of extension.

Your key responsibilities will include leading data engineering efforts on strategic data initiatives; collaborating with business SMEs to design and build front-end applications using Palantir Foundry tools; implementing and maintaining Palantir Ontology Objects, data pipelines, and data transformations; building scalable data workflows using SQL, Python, and PySpark; managing CI/CD tools; monitoring platform performance; participating in Agile/Scrum ceremonies; creating and maintaining comprehensive documentation; adapting applications to evolving business needs; mentoring junior engineers; and effectively communicating technical ideas to non-technical stakeholders.

To be successful in this role, you must possess strong expertise in Python and PySpark; deep experience in data engineering and large-scale data pipeline development; familiarity with CI/CD tools like Git, Jenkins, and CodePipeline; experience with monitoring, alerts, and performance tuning of platforms; strong communication and stakeholder management skills; and the ability to work full-time onsite in Hyderabad. Preferred traits include prior experience building enterprise-scale analytics applications using Palantir, exposure to Ontology-driven design in Foundry, adaptability, a proactive approach to problem-solving, and a passion for mentoring and growing engineering teams.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Jaipur, Rajasthan

On-site

OneDose is revolutionizing medication management through the utilization of advanced AI and data-driven solutions. The primary objective is to enhance the intelligence, safety, and accessibility of every dose on a large scale. Patients often face challenges such as cost constraints, availability issues, or allergies, resulting in missed medications. Addressing this multifaceted clinical and supply chain dilemma necessitates seamless data integration, real-time intelligence, and precise recommendations. The responsibilities include integrating formulary data, supplier inventories, salt compositions, and clinical guidelines into a unified ontology. Moreover, developing a clinical decision support system that offers automated suggestions and deploying real-time recommendation pipelines using Foundry's Code Repositories and Contour (ML orchestration layer). The role of Palantir Foundry Developer is a full-time, on-site position based in Jaipur. The key responsibilities involve constructing and managing data integration pipelines, creating analytical models, and enhancing data workflows using Palantir Foundry. Daily tasks encompass collaborating with diverse teams, troubleshooting data-related issues, and ensuring data quality and adherence to industry standards. The ideal candidate should possess profound expertise in Palantir Foundry ranging from data integration to operational app deployment. Demonstrated experience in constructing data ontologies, data pipelines (PySpark, Python), and production-grade ML workflows is essential. A solid grasp of clinical or healthcare data (medication data, EHRs, or pharmacy systems) is highly advantageous. Additionally, the ability to design scalable, secure, and compliant data solutions for highly regulated environments is crucial. A strong passion for addressing impactful healthcare challenges through advanced technology is desired. A Bachelor's degree in Computer Science, Data Science, or a related field is required. 
Joining OneDose offers the opportunity to make a significant impact by enhancing medication accessibility and patient outcomes in India and globally. You will work with cutting-edge technologies like Palantir Foundry, advanced AI models, and scalable cloud-native architectures. The work environment promotes ownership, growth, innovation, and leadership, enabling you to contribute to shaping the future of healthcare.

Posted 2 weeks ago

Apply

2.0 - 15.0 years

0 - 0 Lacs

Hyderabad, Telangana

On-site

You are a highly skilled, hands-on Palantir Tech Lead with over 15 years of overall IT experience, including at least 1.5 to 2 years of recent experience on Palantir Foundry. You have strong expertise in Python and PySpark and deep experience in data engineering and large-scale data pipeline development. As a key member of our team in Hyderabad, you will lead data engineering efforts on strategic data initiatives, collaborating with business SMEs to design and build front-end applications using Palantir Foundry tools. Your responsibilities include implementing and maintaining Palantir Ontology objects, data pipelines, and data transformations; using advanced SQL, Python, and PySpark to build scalable data workflows; managing CI/CD tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline; and monitoring platform performance with alerting and monitoring tools. You will also participate actively in Agile/Scrum ceremonies, create comprehensive documentation for data catalogs and application workflows, adapt and maintain applications to meet evolving business needs, mentor junior engineers, and communicate complex technical ideas clearly to non-technical stakeholders and business leaders. The ideal candidate has strong communication and stakeholder management skills and can work full-time onsite in Hyderabad. Preferred traits include prior experience building enterprise-scale analytics applications on Palantir, exposure to Ontology-driven design in Foundry, adaptability, a proactive approach to problem-solving, and a passion for mentoring and growing engineering teams. This is an onsite leadership role on a 6+ month extendible contract, with a budget of 32 - 36 LPA.

Posted 2 weeks ago

Apply

1.0 - 5.0 years

3 - 8 Lacs

Bengaluru

Work from Office

" "Role & responsibilities Bachelors degree in Statistics, Data Analytics, Computer Science or related field - Proficient in MS Office, SAP Business Objects, Tableau, CRM (Salesforce.com), Palantir, KNIME or similar - Proficient in SQL, R, Python, PySpark, VBA - Proficient in Excel functions such as complex formulas, macros, Power Pivots - Proficient working with extracting, formatting, validating and analyzing large data sets - Experience using big data analytics applications and programs such as R preferred - Good analytical and problem-solving skills. - Good communication (phone, e-mails, face to face) and interpersonal skills. - Business acumen and supportive mindset - Ability to thrive in a complex matrix environment - Good practice in prioritization and focusing on key tasks - A sound knowledge of written and spoken English - Experience: Analyst 1-3 years, Sr Analyst 3-5 years in the life science, pharmaceutical or biotech industry preferred - Proficient in self-organization, proactive time management, structured planning & execution, customer orientation Responsibilities: - Work on Tableau adhoc sales reports - Creation of Tableau Dashboards and maintenance supporting Sales Rep and Managers - Reporting Mailbox management and SFDC chatters for handling user queries - Acting as the key contact point for Sales Management for all sales territory and commission related questions - identify missing sales, resolve sales recognition issues, manage territory changes and account assignments, as well as crediting adjustments - Maintaining territory alignment files, customer and account assignments in Palantir - Working directly with Sales Mgmt. to support them in data analytics topics - Usage of modern analytics methods to support the development of fact driven mgmt. strategies - Supporting international internal customers with data analysis and reporting. 
- Adhere to TAT and Quality for all the process - Work on adhoc requests or additional responsibilities when asked/requested. - Sr Analyst : Create insightful reporting and dashboards for the commercial org. to increase salesforce efficiencies. Coordinate with internal and external stakeholders as necessary. Support Projects in Palantir & Tableau, maintain and support ISO documents. Development of analytics tools in Tableau, Palantir Big Data environment and SAP BI. Execute assigned projects as needed and recommend and deliver process improvements Preferred candidate profile

Posted 3 weeks ago

Apply

5.0 - 10.0 years

6 - 14 Lacs

Noida, Pune, Bengaluru

Work from Office

Hiring for Palantir Foundry, 4 to 6 years of experience only - India. Proficiency in PySpark and in Palantir Foundry and its applications. Please share your CV with ranjitha@promantusinc.com. Regards, Ranjitha, 7619598141.

Posted 3 weeks ago

Apply

3.0 - 6.0 years

4 - 9 Lacs

Hyderabad

Hybrid

Key responsibilities:
- Design, develop, and optimize data pipelines and workflows using Python, Spark, and SQL
- Work on data integration and transformation using the Palantir Foundry platform
- Collaborate with data engineers, data scientists, and business stakeholders to build scalable and efficient data solutions
- Implement reusable code components, automate data processes, and ensure data quality
- Provide support for troubleshooting and resolving data-related issues in Foundry
- Write clean, maintainable, and efficient code following best practices
- Participate in code reviews, sprint planning, and other Agile ceremonies

Mandatory skills:
- Python: strong programming and scripting skills
- SQL: strong querying and data manipulation
- Apache Spark: experience with distributed data processing
- Palantir Foundry: good working knowledge and hands-on experience
- Good understanding of data modeling, ETL, and data pipelines
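The pipeline work this posting describes typically follows a transform-and-validate pattern: ingest raw records, enforce data-quality rules, and emit a cleaned dataset. A minimal sketch of that pattern, in plain Python for readability (in practice this logic would run as a PySpark transform inside Foundry; all names here are hypothetical):

```python
# Illustrative transform-and-validate step. In a real deployment this
# would be a PySpark transform on a Foundry dataset; plain Python is
# used here only to show the pattern. All names are hypothetical.

def clean_orders(rows):
    """Drop rows missing required fields and normalize amounts."""
    required = ("order_id", "amount")
    cleaned = []
    for row in rows:
        # Data-quality rule: reject records with missing key fields.
        if any(row.get(k) is None for k in required):
            continue
        cleaned.append({**row, "amount": round(float(row["amount"]), 2)})
    return cleaned

raw = [
    {"order_id": 1, "amount": "10.50"},
    {"order_id": None, "amount": "3.50"},  # rejected: no order_id
    {"order_id": 2, "amount": "7.1"},
]
print(clean_orders(raw))  # two cleaned rows; the incomplete record is dropped
```

The same filter-then-normalize shape maps directly onto `DataFrame.filter` and `withColumn` calls when the data is too large for a single machine.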

Posted 1 month ago

Apply

4.0 - 8.0 years

15 - 30 Lacs

Bengaluru

Hybrid

Role & responsibilities: Developing back-end code logic that leverages semantic object linking (ontologies) within Palantir Foundry Pipeline Builder, Code Workbook, and Ontology Manager. Creating servers, databases, and datasets as needed. Ensuring the health of data connections and pipelines (utilizing filesystem, JDBC, SFTP, and webhook sources). Ensuring conformance with security protocols and markings on sensitive data sets. Ensuring responsiveness of web applications developed on low-code/no-code solutions. Ensuring cross-platform optimization for mobile phones. Seeing projects through from conception to finished product. Proficiency with fundamental front-end languages such as HTML, CSS, and JavaScript, and with databases such as MySQL, Oracle, and MongoDB, preferred. Proficiency with server-side languages for structured data processing (Python, PySpark, Java, Apache Spark, and Spark SQL) preferred.

Posted 1 month ago

Apply

3.0 - 8.0 years

8 - 18 Lacs

Hyderabad

Work from Office

Hyderabad or Remote. Responsibilities: * Design, develop, and maintain Palantir platforms using Foundry/Gotham/Apollo technologies. * Collaborate with cross-functional teams on project delivery and support. Send your resume to tanweer@cymbaltech.com.

Posted 1 month ago

Apply

6.0 - 11.0 years

15 - 25 Lacs

Pune, Chennai, Bengaluru

Work from Office

Job Summary: Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field. 6+ years of experience in data engineering or analytics, with 2+ years leading Palantir Foundry/Gotham implementations. Strong understanding of data integration, transformation, and modeling techniques. Proficiency in Python and SQL, and experience with pipeline development using Palantir tools. Understanding of the Banking and Financial Services industry. Excellent communication and stakeholder management skills. Experience with Agile project delivery and cross-functional team collaboration.

Posted 1 month ago

Apply

7.0 - 9.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the role: As a Senior Data Risk Manager, you will play a central role in shaping how Swiss Re identifies, assesses, and governs operational risks linked to data. Sitting in the 2nd Line of Defence, you will provide independent oversight, advise on control effectiveness, and challenge risk-taking decisions related to data use, storage, quality, lineage, and security. You'll also have the opportunity to influence our approach to data-related risks in AI and emerging technologies, helping shape governance practices that extend across a global enterprise. Key Responsibilities: Design and enhance Swiss Re's Data Risk Control Framework by identifying and embedding key controls across the data lifecycle. Challenge and advise 1st Line teams on risk identification, assessment, and control adequacy related to data management and digital processes. Lead risk reviews and thematic assessments across digital services, systems, or strategic technology projects to surface and address data management risks. Monitor implementation of data risk controls across business units and functions, gathering feedback to support continuous improvement. Establish risk reporting and monitoring standards for data management risks at Group level, providing clear risk insights to senior stakeholders. Assess AI-related data risks, ensuring alignment with applicable internal governance and external regulatory frameworks. Engage regularly with senior stakeholders, promoting a strong risk culture and influencing data governance behaviour across the organisation. About the team: The Digital & Technology Risk Management (DTRM) team acts as the 2nd Line of Defence for all digital and technology-related risks at Swiss Re. We provide independent oversight, challenge, and insight across Swiss Re's global digital landscape.
Serving as an independent partner to the business, we help shape the Group's risk posture across various technology domains, ranging from infrastructure and application security to digital innovation and AI. Our commitment lies in driving high standards of resilience, informed risk-taking, and sound control practices through strong engagement and credible challenge. From reviewing control frameworks to assessing emerging risks, we help shape responsible innovation and build resilience into every layer of our technology environment. About you: We are looking for a confident and forward-thinking risk professional with a deep understanding of data governance and its associated risks. Experience & Capabilities: Minimum 7 years of experience in operational risk, digital/technology risk, or data governance roles, preferably within financial services, reinsurance, or consulting. Familiarity with data lifecycle and records management frameworks (e.g., DAMA-DMBOK) and their practical application across large organisations. Proven experience conducting risk assessments, spot checks, and thematic reviews in a complex, regulated environment. Technical & Tooling: Familiarity with data quality assurance techniques, metadata management, and lineage tracking. Proficient in using data governance platforms (e.g., Collibra, Palantir Foundry) and supporting tools to analyse or visualise data flows and risks. Strong understanding of AI/ML data governance risks and regulatory developments (e.g., GDPR, AI Act, data ethics frameworks). Behavioural & Interpersonal: Comfortable working independently, including collaboration with managers or stakeholders in different time zones. Strong stakeholder engagement and communication skills, with the ability to influence and challenge at all levels. Demonstrated ability to balance business enablement with effective risk management.
Certifications (Desirable): Certified Data Management Professional (CDMP); Certified in Risk and Information Systems Control (CRISC); other data or risk-related qualifications are a plus. Reference Code: 134393

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Pune

Work from Office

We're Lear for You. Lear, a global automotive technology leader in Seating and E-Systems, is making every drive better by delivering intelligent in-vehicle experiences for customers around the world. With over 100 years of experience, Lear has earned a legacy of operational excellence while building its future on innovation. Our talented team is committed to creating products that ensure the comfort, well-being, convenience, and safety of consumers. Working together, we are making every drive better. To know more about Lear, please visit our career site: www.lear.com Job Title: Lead Data Engineer Function: Data Engineer Location: Bhosari, Pune Position Focus: As a Lead Data Engineer at Lear, you will take a leadership role in designing, building, and maintaining robust data pipelines within the Foundry platform. Your expertise will drive the seamless integration of data and analytics, ensuring high-quality datasets and supporting critical decision-making processes. If you're passionate about data engineering and have a track record of excellence, this role is for you! Job Description: Manage Execution of Data-Focused Projects: As a senior member of the Lear Foundry team, support the design, build, and maintenance of data-focused projects using Lear's data analytics and application platforms. Participate in projects from conception through root cause analytics and solution deployment. Understand program and product delivery phases, contributing expert analysis across the lifecycle. Ensure project deliverables are met per the agreed timeline. Tools and Technologies: Utilize key tools within Palantir Foundry, including: Pipeline Builder: author data pipelines using a visual interface. Code Repositories: manage code for data pipeline development. Data Lineage: visualize end-to-end data flows. Leverage programmatic health checks to ensure pipeline durability. Work with both new and legacy technologies to integrate separate data feeds and transform them into new scalable datasets.
Mentor junior data engineers on best practices. Data Pipeline Architecture and Development: Lead the design and implementation of complex data pipelines. Collaborate with cross-functional teams to ensure scalability, reliability, and efficiency, and utilize Git concepts for version control and collaborative development. Optimize data ingestion, transformation, and enrichment processes. Big Data, Dataset Creation and Maintenance: Utilize Pipeline Builder or code repositories to transform big data into manageable datasets and produce high-quality datasets that meet the organization's needs. Implement optimum build times to ensure effective utilization of resources. High-Quality Dataset Production: Produce and maintain datasets that meet organizational needs. Optimize the size and build schedule of datasets to reflect the latest information. Implement data quality health checks and validation. Collaboration and Leadership: Work closely with data scientists, analysts, and operational teams. Provide technical guidance and foster a collaborative environment. Champion transparency and effective decision-making. Continuous Improvement: Stay abreast of industry trends and emerging technologies. Enhance pipeline performance, reliability, and maintainability. Contribute to the evolution of Foundry's data engineering capabilities. Compliance and Data Security: Ensure documentation and procedures align with internal practices (ITPM) and Sarbanes-Oxley requirements, continuously improving them. Quality Assurance & Optimization: Optimize data pipelines and their impact on the resource utilization of downstream processes. Continuously test and improve data pipeline performance and reliability. Optimize system performance for all deployed resources. Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Experience: Minimum 5 years of experience in data engineering, ETL, and data integration. Proficiency in Python and libraries like PySpark, Pandas, and NumPy. Strong understanding of Palantir Foundry and its capabilities. Familiarity with big data technologies (e.g., Spark, Hadoop, Kafka). Excellent problem-solving skills and attention to detail. Effective communication and leadership abilities.
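The "programmatic health checks" this posting refers to generally reduce to computing a quality metric over a dataset and failing the build when it breaches a tolerance. A minimal sketch of that idea, in plain Python (in Foundry this would typically be expressed as a check on a Spark dataset; the column and threshold here are illustrative assumptions):

```python
# Illustrative pipeline health check: compute a quality metric and fail
# the build when it breaches a tolerance. In Foundry this would be a
# programmatic check on a Spark dataset; names here are hypothetical.

def null_rate(rows, column):
    """Fraction of rows where `column` is missing."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(column) is None)
    return missing / len(rows)

def assert_healthy(rows, column, max_null_rate=0.05):
    """Raise if the null rate of `column` exceeds the tolerance."""
    rate = null_rate(rows, column)
    if rate > max_null_rate:
        raise ValueError(
            f"{column}: null rate {rate:.1%} exceeds {max_null_rate:.1%}"
        )
    return rate

vehicles = [{"vin": "A1"}, {"vin": "B2"}, {"vin": None}]
try:
    assert_healthy(vehicles, "vin")  # 1 of 3 rows missing -> check fails
except ValueError as err:
    print("health check failed:", err)
```

Wiring a check like this into the build (rather than into downstream consumers) is what keeps a bad upstream feed from silently propagating through dependent datasets.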

Posted 1 month ago

Apply

1.0 - 3.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Job Description: Data Engineer I (F Band) About the Role: As a Data Engineer, you will be responsible for implementing data pipelines and analytics solutions to support key decision-making processes in our Life & Health Reinsurance business. You will become part of a project that is leveraging cutting-edge technology that applies Big Data and Machine Learning to solve new and emerging problems for Swiss Re. You will be expected to gain a full understanding of the reinsurance data and business logic required to deliver analytics solutions. Key responsibilities include: Work closely with Product Owners and Engineering Leads to understand requirements and evaluate the implementation effort. Develop and maintain scalable data transformation pipelines. Implement analytics models and visualizations to provide actionable data insights. Collaborate within a global development team to design and deliver solutions. About the Team: Life & Health Data & Analytics Engineering is a key tech partner for our Life & Health Reinsurance division, supporting the transformation of the data landscape and the creation of innovative analytical products and capabilities. A large, globally distributed team working in an agile development landscape, we deliver solutions to make better use of our reinsurance data and enhance our ability to make data-driven decisions across the business value chain. About You: Are you eager to disrupt the industry with us and make an impact? Do you wish to have your talent recognized and rewarded? Then join our growing team and become part of the next wave of data innovation. Key qualifications include: Bachelor's degree level or equivalent in Computer Science, Data Science, or a similar discipline. At least 1-3 years of experience working with large-scale software systems. Proficient in Python/PySpark. Proficient in SQL (Spark SQL preferred). Palantir Foundry experience is a strong plus.
Experience working with large data sets on enterprise data platforms and distributed computing (Spark/Hive/Hadoop preferred). Experience with JavaScript/HTML/CSS a plus. Experience working in a cloud environment such as AWS or Azure is a plus. Strong analytical and problem-solving skills. Enthusiasm for working in a global and multicultural environment of internal and external professionals. Strong interpersonal and communication skills, demonstrating a clear and articulate standard of written and verbal communication in complex environments. Reference Code: 134086

Posted 1 month ago

Apply

5.0 - 10.0 years

1 - 2 Lacs

Hyderabad

Work from Office

3 years of leading experience. Strong in #Python programming, #PySpark queries, and #Palantir. Experience using tools such as #Git/#Bitbucket, #Jenkins/#CodeBuild, and #CodePipeline.

Posted 1 month ago

Apply

1.0 - 3.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

About the Role: As a Data Engineer, you will be responsible for implementing data pipelines and analytics solutions to support key decision-making processes in our Life & Health Reinsurance business. You will become part of a project that is leveraging cutting-edge technology that applies Big Data and Machine Learning to solve new and emerging problems for Swiss Re. You will be expected to gain a full understanding of the reinsurance data and business logic required to deliver analytics solutions. Key responsibilities include: Work closely with Product Owners and Engineering Leads to understand requirements and evaluate the implementation effort. Develop and maintain scalable data transformation pipelines. Implement analytics models and visualizations to provide actionable data insights. Collaborate within a global development team to design and deliver solutions. About the Team: Life & Health Data & Analytics Engineering is a key tech partner for our Life & Health Reinsurance division, supporting the transformation of the data landscape and the creation of innovative analytical products and capabilities. A large, globally distributed team working in an agile development landscape, we deliver solutions to make better use of our reinsurance data and enhance our ability to make data-driven decisions across the business value chain. About You: Are you eager to disrupt the industry with us and make an impact? Do you wish to have your talent recognized and rewarded? Then join our growing team and become part of the next wave of data innovation. Key qualifications include: Bachelor's degree level or equivalent in Computer Science, Data Science, or a similar discipline. At least 1-3 years of experience working with large-scale software systems. Proficient in Python/PySpark. Proficient in SQL (Spark SQL preferred). Palantir Foundry experience is a strong plus.
Experience working with large data sets on enterprise data platforms and distributed computing (Spark/Hive/Hadoop preferred). Experience with JavaScript/HTML/CSS a plus. Experience working in a cloud environment such as AWS or Azure is a plus. Strong analytical and problem-solving skills. Enthusiasm for working in a global and multicultural environment of internal and external professionals. Strong interpersonal and communication skills, demonstrating a clear and articulate standard of written and verbal communication in complex environments. Reference Code: 134085

Posted 1 month ago

Apply
Page 1 of 2