Jobs
Interviews

81 Palantir Foundry Jobs - Page 3

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the employer's job portal.

8.0 - 13.0 years

13 - 18 Lacs

Hyderabad

Work from Office

4+ years of advanced working knowledge of SQL, Python, and PySpark queries (must-have). Experience in Palantir required.

Posted 2 months ago

Apply

2.0 - 15.0 years

0 Lacs

Hyderabad, Telangana

On-site

You are a highly skilled, hands-on Palantir Tech Lead with over 15 years of total IT experience, including at least 1.5 to 2 years of recent experience in Palantir Foundry. You will join our team in Hyderabad in an onsite leadership role that requires technical depth, project ownership, and cross-functional collaboration. The engagement is a 6+ month contract with the possibility of extension.

Your key responsibilities will include: leading data engineering efforts on strategic data initiatives; collaborating with business SMEs to design and build front-end applications using Palantir Foundry tools; implementing and maintaining Palantir Ontology Objects, data pipelines, and data transformations; building scalable data workflows using SQL, Python, and PySpark; managing CI/CD tools; monitoring platform performance; participating in Agile/Scrum ceremonies; creating and maintaining comprehensive documentation; adapting applications to evolving business needs; mentoring junior engineers; and communicating technical ideas effectively to non-technical stakeholders.

To succeed in this role, you must have strong expertise in Python and PySpark; deep experience in data engineering and large-scale data pipeline development; familiarity with CI/CD tools such as Git, Jenkins, and CodePipeline; experience with monitoring, alerting, and platform performance tuning; strong communication and stakeholder management skills; and the ability to work full-time onsite in Hyderabad. Preferred traits include prior experience building enterprise-scale analytics applications with Palantir, exposure to Ontology-driven design in Foundry, adaptability, a proactive approach to problem-solving, and a passion for mentoring and growing engineering teams.

Posted 2 months ago

Apply

3.0 - 7.0 years

0 Lacs

Jaipur, Rajasthan

On-site

OneDose is revolutionizing medication management with advanced AI and data-driven solutions, with the goal of making every dose smarter, safer, and more accessible at scale. Patients often miss medications because of cost constraints, availability issues, or allergies. Solving this combined clinical and supply-chain problem requires seamless data integration, real-time intelligence, and precise recommendations.

The responsibilities include integrating formulary data, supplier inventories, salt compositions, and clinical guidelines into a unified ontology; developing a clinical decision support system that offers automated suggestions; and deploying real-time recommendation pipelines using Foundry's Code Repositories and Contour (the ML orchestration layer). The Palantir Foundry Developer role is a full-time, on-site position based in Jaipur. Key responsibilities involve building and managing data integration pipelines, creating analytical models, and enhancing data workflows using Palantir Foundry. Daily tasks include collaborating with diverse teams, troubleshooting data-related issues, and ensuring data quality and adherence to industry standards.

The ideal candidate has deep expertise in Palantir Foundry, from data integration to operational app deployment. Demonstrated experience building data ontologies, data pipelines (PySpark, Python), and production-grade ML workflows is essential. A solid grasp of clinical or healthcare data (medication data, EHRs, or pharmacy systems) is highly advantageous, and the ability to design scalable, secure, and compliant data solutions for highly regulated environments is crucial. A strong passion for addressing impactful healthcare challenges through advanced technology is desired. A Bachelor's degree in Computer Science, Data Science, or a related field is required.
Joining OneDose offers the opportunity to make a significant impact by improving medication accessibility and patient outcomes in India and globally. You will work with cutting-edge technologies such as Palantir Foundry, advanced AI models, and scalable cloud-native architectures, in an environment that promotes ownership, growth, innovation, and leadership, enabling you to help shape the future of healthcare.

Posted 2 months ago

Apply

2.0 - 15.0 years

0 - 0 Lacs

Hyderabad, Telangana

On-site

You are a highly skilled, hands-on Palantir Tech Lead with over 15 years of overall IT experience, including at least 1.5 to 2 years of recent experience with Palantir Foundry. Your expertise in Python and PySpark is strong, and you have deep experience in data engineering and large-scale data pipeline development.

As a key member of our team in Hyderabad, you will lead data engineering efforts on strategic data initiatives, collaborating with business SMEs to design and build front-end applications using Palantir Foundry tools. Your responsibilities will include implementing and maintaining Palantir Ontology Objects, data pipelines, and data transformations, and using advanced knowledge of SQL, Python, and PySpark to build scalable data workflows. You will manage CI/CD tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline, and monitor platform performance using alerting and monitoring tools. The role also includes active participation in Agile/Scrum ceremonies, creating comprehensive documentation for data catalogs and application workflows, adapting and maintaining applications to meet evolving business needs, mentoring junior engineers, and communicating complex technical ideas clearly to non-technical stakeholders and business leaders.

The ideal candidate has strong communication and stakeholder management skills and can work full-time onsite in Hyderabad. Preferred traits include prior experience building enterprise-scale analytics applications with Palantir, exposure to Ontology-driven design in Foundry, adaptability, a proactive approach to problem-solving, and a passion for mentoring and growing engineering teams. This is an onsite leadership role on a 6+ month, extendable contract, with a budget of 32 - 36 LPA.

Posted 2 months ago

Apply

1.0 - 5.0 years

3 - 8 Lacs

Bengaluru

Work from Office

" "Role & responsibilities Bachelors degree in Statistics, Data Analytics, Computer Science or related field - Proficient in MS Office, SAP Business Objects, Tableau, CRM (Salesforce.com), Palantir, KNIME or similar - Proficient in SQL, R, Python, PySpark, VBA - Proficient in Excel functions such as complex formulas, macros, Power Pivots - Proficient working with extracting, formatting, validating and analyzing large data sets - Experience using big data analytics applications and programs such as R preferred - Good analytical and problem-solving skills. - Good communication (phone, e-mails, face to face) and interpersonal skills. - Business acumen and supportive mindset - Ability to thrive in a complex matrix environment - Good practice in prioritization and focusing on key tasks - A sound knowledge of written and spoken English - Experience: Analyst 1-3 years, Sr Analyst 3-5 years in the life science, pharmaceutical or biotech industry preferred - Proficient in self-organization, proactive time management, structured planning & execution, customer orientation Responsibilities: - Work on Tableau adhoc sales reports - Creation of Tableau Dashboards and maintenance supporting Sales Rep and Managers - Reporting Mailbox management and SFDC chatters for handling user queries - Acting as the key contact point for Sales Management for all sales territory and commission related questions - identify missing sales, resolve sales recognition issues, manage territory changes and account assignments, as well as crediting adjustments - Maintaining territory alignment files, customer and account assignments in Palantir - Working directly with Sales Mgmt. to support them in data analytics topics - Usage of modern analytics methods to support the development of fact driven mgmt. strategies - Supporting international internal customers with data analysis and reporting. 
- Adhere to TAT and Quality for all the process - Work on adhoc requests or additional responsibilities when asked/requested. - Sr Analyst : Create insightful reporting and dashboards for the commercial org. to increase salesforce efficiencies. Coordinate with internal and external stakeholders as necessary. Support Projects in Palantir & Tableau, maintain and support ISO documents. Development of analytics tools in Tableau, Palantir Big Data environment and SAP BI. Execute assigned projects as needed and recommend and deliver process improvements Preferred candidate profile

Posted 2 months ago

Apply

5.0 - 10.0 years

6 - 14 Lacs

Noida, Pune, Bengaluru

Work from Office

Hiring for Palantir Foundry, 4 to 6 years of experience only (India). Proficiency in PySpark and in Palantir Foundry and its applications. Please share your CV at ranjitha@promantusinc.com. Regards, Ranjitha, 7619598141.

Posted 2 months ago

Apply

3.0 - 6.0 years

4 - 9 Lacs

Hyderabad

Hybrid

Role & responsibilities: Design, develop, and optimize data pipelines and workflows using Python, Spark, and SQL. Work on data integration and transformation using the Palantir Foundry platform. Collaborate with data engineers, data scientists, and business stakeholders to build scalable and efficient data solutions. Implement reusable code components, automate data processes, and ensure data quality. Provide support for troubleshooting and resolving data-related issues in Foundry. Write clean, maintainable, and efficient code following best practices. Participate in code reviews, sprint planning, and other Agile ceremonies. Mandatory skills: Python (strong programming and scripting skills); SQL (strong querying and data manipulation); Apache Spark (experience with distributed data processing); Palantir Foundry (good working knowledge and hands-on experience); a good understanding of data modeling, ETL, and data pipelines.
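The pipeline work described above can be illustrated with a tiny, self-contained sketch. The example below mirrors the shape of a typical PySpark filter-and-aggregate stage in plain Python (so it runs anywhere); the `orders` dataset and its column names are invented for illustration.

```python
# Minimal sketch of a filter-and-aggregate pipeline stage, written in plain
# Python to mirror a typical PySpark transform. All data and field names
# here are invented for illustration.
from collections import defaultdict

orders = [
    {"region": "south", "amount": 120.0, "status": "complete"},
    {"region": "south", "amount": 80.0,  "status": "cancelled"},
    {"region": "north", "amount": 200.0, "status": "complete"},
]

# Equivalent in spirit to:
#   df.filter(col("status") == "complete").groupBy("region").agg(sum("amount"))
totals = defaultdict(float)
for row in orders:
    if row["status"] == "complete":
        totals[row["region"]] += row["amount"]

print(dict(totals))  # {'south': 120.0, 'north': 200.0}
```

In Foundry the same logic would typically live in a transform over a Spark DataFrame, but the filter-then-aggregate shape is identical.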

Posted 2 months ago

Apply

4.0 - 8.0 years

15 - 30 Lacs

Bengaluru

Hybrid

Role & responsibilities: Developing back-end code logic that leverages semantic object linking (ontologies) within Palantir Foundry Pipeline Builder, Code Workbook, and Ontology Manager. Creating servers, databases, and datasets as needed. Ensuring the health of data connections and pipelines (utilizing filesystem, JDBC, SFTP, and webhook sources). Ensuring conformance with security protocols and markings on sensitive data sets. Ensuring responsiveness of web applications developed on low-code/no-code solutions. Ensuring cross-platform optimization for mobile phones. Seeing projects through from conception to finished product. Proficiency with fundamental front-end languages such as HTML, CSS, and JavaScript, plus databases such as MySQL, Oracle, and MongoDB, preferred. Proficiency with server-side languages for structured data processing, such as Python, PySpark, Java, Apache Spark, and Spark SQL, preferred.
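As a rough sketch of what "ensuring the health of data connections and pipelines" can mean in practice, here is a minimal data-feed health check. It uses Python's built-in sqlite3 as a stand-in for a JDBC source; the `sensor_readings` table, its columns, and the specific checks are hypothetical, not Foundry APIs.

```python
# Hypothetical health check on a tabular feed, using sqlite3 as a stand-in
# for a JDBC connection. In Foundry, checks like these would typically run
# as scheduled health checks on a dataset. Names here are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sensor_readings (id INTEGER, value REAL)")
conn.executemany(
    "INSERT INTO sensor_readings VALUES (?, ?)",
    [(1, 10.5), (2, None), (3, 12.1)],
)

# Two basic checks: the feed is non-empty, and nulls stay under a threshold.
total = conn.execute("SELECT COUNT(*) FROM sensor_readings").fetchone()[0]
nulls = conn.execute(
    "SELECT COUNT(*) FROM sensor_readings WHERE value IS NULL"
).fetchone()[0]

assert total > 0, "empty feed"
null_ratio = nulls / total
print(f"rows={total}, null_ratio={null_ratio:.2f}")  # rows=3, null_ratio=0.33
```

The same pattern (row counts, null ratios, freshness timestamps) applies regardless of whether the source arrives via filesystem, JDBC, SFTP, or webhook.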

Posted 2 months ago

Apply

3.0 - 8.0 years

8 - 18 Lacs

Hyderabad

Work from Office

Hyderabad or Remote. Responsibilities: * Design, develop, and maintain Palantir platforms using Foundry/Gotham/Apollo technologies. * Collaborate with cross-functional teams on project delivery and support. Send your resume to tanweer@cymbaltech.com

Posted 2 months ago

Apply

6.0 - 11.0 years

15 - 25 Lacs

Pune, Chennai, Bengaluru

Work from Office

Job Summary: Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field. 6+ years of experience in data engineering or analytics, with 2+ years leading Palantir Foundry/Gotham implementations. Strong understanding of data integration, transformation, and modeling techniques. Proficiency in Python and SQL, and experience with pipeline development using Palantir tools. Understanding of the Banking and Financial Services industry. Excellent communication and stakeholder management skills. Experience with Agile project delivery and cross-functional team collaboration.

Posted 2 months ago

Apply

7.0 - 9.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the role: As a Senior Data Risk Manager , you will play a central role in shaping how Swiss Re identifies, assesses, and governs operational risks linked to data. Sitting in the 2nd Line of Defence, you will provide independent oversight, advise on control effectiveness, and challenge risk-taking decisions related to data use, storage, quality, lineage, and security. You'll also have the opportunity to influence our approach to data-related risks in AI and emerging technologies, helping shape governance practices that extend across a global enterprise. Key Responsibilities: Design and enhance Swiss Re's Data Risk Control Framework by identifying and embedding key controls across the data lifecycle. Challenge and advise 1st Line teams on risk identification, assessment, and control adequacy related to data management and digital processes. Lead risk reviews and thematic assessments across digital services, systems, or strategic technology projects to surface and address data management risks. Monitor implementation of data risk controls across business units and functions, gathering feedback to support continuous improvement. Establish risk reporting and monitoring standards for data management risks at Group level, providing clear risk insights to senior stakeholders. Assess AI-related data risks , ensuring alignment with applicable internal governance and external regulatory frameworks. Engage regularly with senior stakeholders , promoting a strong risk culture and influencing data governance behaviour across the organisation. About the team: The Digital & Technology Risk Management (DTRM) team acts as the 2nd Line of Defence for all digital and technology-related risks at Swiss Re. We provide independent oversight, challenge, and insight across Swiss Re's global digital landscape. 
Serving as an independent partner to the business, we help shape the Group's risk posture across various technology domains, ranging from infrastructure and application security to digital innovation and AI. Our commitment lies in driving high standards of resilience, informed risk-taking, and sound control practices through strong engagement and credible challenge. From reviewing control frameworks to assessing emerging risks, we help shape responsible innovation and build resilience into every layer of our technology environment. About you: We are looking for a confident and forward-thinking risk professional with a deep understanding of data governance and its associated risks. Experience & capabilities: Minimum 7 years of experience in operational risk, digital/technology risk, or data governance roles, preferably within financial services, reinsurance, or consulting. Familiarity with data lifecycle and records management frameworks (e.g., DAMA-DMBOK) and their practical application across large organisations. Proven experience conducting risk assessments, spot checks, and thematic reviews in a complex, regulated environment. Technical & tooling: Familiarity with data quality assurance techniques, metadata management, and lineage tracking. Proficient in using data governance platforms (e.g., Collibra, Palantir Foundry) and supporting tools to analyse or visualise data flows and risks. Strong understanding of AI/ML data governance risks and regulatory developments (e.g., GDPR, the AI Act, data ethics frameworks). Behavioural & interpersonal: Comfortable working independently, including collaborating with managers or stakeholders in different time zones. Strong stakeholder engagement and communication skills, with the ability to influence and challenge at all levels. Demonstrated ability to balance business enablement with effective risk management.
Certifications (desirable): Certified Data Management Professional (CDMP); Certified in Risk and Information Systems Control (CRISC); other data- or risk-related qualifications are a plus. Reference Code: 134393

Posted 2 months ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Pune

Work from Office

We're Lear for You. Lear, a global automotive technology leader in Seating and E-Systems, is Making every drive better by delivering intelligent in-vehicle experiences for customers around the world. With over 100 years of experience, Lear has earned a legacy of operational excellence while building its future on innovation. Our talented team is committed to creating products that ensure the comfort, well-being, convenience, and safety of consumers. To learn more about Lear, please visit our career site: www.lear.com

Job Title: Lead Data Engineer. Function: Data Engineer. Location: Bhosari, Pune.

Position Focus: As a Lead Data Engineer at Lear, you will take a leadership role in designing, building, and maintaining robust data pipelines within the Foundry platform. Your expertise will drive the seamless integration of data and analytics, ensuring high-quality datasets and supporting critical decision-making processes. If you're passionate about data engineering and have a track record of excellence, this role is for you!

Job Description:
Manage execution of data-focused projects: As a senior member of the Lear Foundry team, support the design, build, and maintenance of data-focused projects using Lear's data analytics and application platforms. Participate in projects from conception through root cause analytics and solution deployment. Understand program and product delivery phases, contributing expert analysis across the lifecycle. Ensure project deliverables are met per the agreed timeline.
Tools and technologies: Utilize key tools within Palantir Foundry, including Pipeline Builder (author data pipelines using a visual interface), Code Repositories (manage code for data pipeline development), and Data Lineage (visualize end-to-end data flows). Leverage programmatic health checks to ensure pipeline durability. Work with both new and legacy technologies to integrate separate data feeds and transform them into new, scalable datasets.
Mentor junior data engineers on best practices.
Data pipeline architecture and development: Lead the design and implementation of complex data pipelines. Collaborate with cross-functional teams to ensure scalability, reliability, and efficiency, and use Git concepts for version control and collaborative development. Optimize data ingestion, transformation, and enrichment processes.
Big data and dataset creation and maintenance: Use Pipeline Builder or Code Repositories to transform big data into manageable datasets, producing high-quality datasets that meet the organization's needs. Implement optimal build times to ensure effective utilization of resources.
High-quality dataset production: Produce and maintain datasets that meet organizational needs. Optimize the size and build schedule of datasets so they reflect the latest information. Implement data quality health checks and validation.
Collaboration and leadership: Work closely with data scientists, analysts, and operational teams. Provide technical guidance and foster a collaborative environment. Champion transparency and effective decision-making.
Continuous improvement: Stay abreast of industry trends and emerging technologies. Enhance pipeline performance, reliability, and maintainability. Contribute to the evolution of Foundry's data engineering capabilities.
Compliance and data security: Ensure documentation and procedures align with internal practices (ITPM) and Sarbanes-Oxley requirements, continuously improving them.
Quality assurance and optimization: Optimize data pipelines and their impact on the resource utilization of downstream processes. Continuously test and improve data pipeline performance and reliability. Optimize system performance for all deployed resources.

Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Experience: Minimum 5 years of experience in data engineering, ETL, and data integration. Proficiency in Python and libraries such as PySpark, pandas, and NumPy. Strong understanding of Palantir Foundry and its capabilities. Familiarity with big data technologies (e.g., Spark, Hadoop, Kafka). Excellent problem-solving skills and attention to detail. Effective communication and leadership abilities.

Posted 2 months ago

Apply

1.0 - 3.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Job Description: Data Engineer I (F Band). About the Role: As a Data Engineer, you will be responsible for implementing data pipelines and analytics solutions to support key decision-making processes in our Life & Health Reinsurance business. You will become part of a project that leverages cutting-edge technology, applying Big Data and Machine Learning to solve new and emerging problems for Swiss Re. You will be expected to gain a full understanding of the reinsurance data and business logic required to deliver analytics solutions. Key responsibilities include: working closely with Product Owners and Engineering Leads to understand requirements and evaluate implementation effort; developing and maintaining scalable data transformation pipelines; implementing analytics models and visualizations to provide actionable data insights; and collaborating within a global development team to design and deliver solutions. About the Team: Life & Health Data & Analytics Engineering is a key tech partner for our Life & Health Reinsurance division, supporting the transformation of the data landscape and the creation of innovative analytical products and capabilities. A large, globally distributed team working in an agile development landscape, we deliver solutions to make better use of our reinsurance data and enhance our ability to make data-driven decisions across the business value chain. About You: Are you eager to disrupt the industry with us and make an impact? Do you wish to have your talent recognized and rewarded? Then join our growing team and become part of the next wave of data innovation. Key qualifications include: a Bachelor's degree or equivalent in Computer Science, Data Science, or a similar discipline; at least 1-3 years of experience working with large-scale software systems; proficiency in Python/PySpark; proficiency in SQL (Spark SQL preferred); Palantir Foundry experience is a strong plus.
Experience working with large data sets on enterprise data platforms and distributed computing (Spark/Hive/Hadoop preferred). Experience with JavaScript/HTML/CSS is a plus. Experience working in a cloud environment such as AWS or Azure is a plus. Strong analytical and problem-solving skills. Enthusiasm for working in a global and multicultural environment of internal and external professionals. Strong interpersonal and communication skills, demonstrating a clear and articulate standard of written and verbal communication in complex environments. Reference Code: 134086

Posted 3 months ago

Apply

5.0 - 10.0 years

1 - 2 Lacs

Hyderabad

Work from Office

3+ years of leadership experience. Strong in Python programming, PySpark queries, and Palantir. Experience using tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline.

Posted 3 months ago

Apply

1.0 - 3.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

About the Role: As a Data Engineer, you will be responsible for implementing data pipelines and analytics solutions to support key decision-making processes in our Life & Health Reinsurance business. You will become part of a project that leverages cutting-edge technology, applying Big Data and Machine Learning to solve new and emerging problems for Swiss Re. You will be expected to gain a full understanding of the reinsurance data and business logic required to deliver analytics solutions. Key responsibilities include: working closely with Product Owners and Engineering Leads to understand requirements and evaluate implementation effort; developing and maintaining scalable data transformation pipelines; implementing analytics models and visualizations to provide actionable data insights; and collaborating within a global development team to design and deliver solutions. About the Team: Life & Health Data & Analytics Engineering is a key tech partner for our Life & Health Reinsurance division, supporting the transformation of the data landscape and the creation of innovative analytical products and capabilities. A large, globally distributed team working in an agile development landscape, we deliver solutions to make better use of our reinsurance data and enhance our ability to make data-driven decisions across the business value chain. About You: Are you eager to disrupt the industry with us and make an impact? Do you wish to have your talent recognized and rewarded? Then join our growing team and become part of the next wave of data innovation. Key qualifications include: a Bachelor's degree or equivalent in Computer Science, Data Science, or a similar discipline; at least 1-3 years of experience working with large-scale software systems; proficiency in Python/PySpark; proficiency in SQL (Spark SQL preferred); Palantir Foundry experience is a strong plus.
Experience working with large data sets on enterprise data platforms and distributed computing (Spark/Hive/Hadoop preferred). Experience with JavaScript/HTML/CSS is a plus. Experience working in a cloud environment such as AWS or Azure is a plus. Strong analytical and problem-solving skills. Enthusiasm for working in a global and multicultural environment of internal and external professionals. Strong interpersonal and communication skills, demonstrating a clear and articulate standard of written and verbal communication in complex environments. Reference Code: 134085

Posted 3 months ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad

Work from Office

Work Responsibilities: The Palantir Developer will be responsible for designing and implementing modern data architecture solutions that facilitate enterprise-level transformation. Key responsibilities include:
Data architecture design: Create and optimize modern data architectures that support advanced analytics and operational requirements.
Pipelining: Develop and maintain efficient data pipelines using Palantir Foundry to ensure seamless data flow and accessibility for analytics.
Advanced analytics: Create and deploy advanced analytics products that provide actionable insights to stakeholders, enhancing decision-making processes.
Artificial intelligence integration: Collaborate with data scientists to incorporate AI and machine learning models into data pipelines and analytics products, enabling predictive capabilities.
Agentic AI exposure: Leverage knowledge of agentic AI to develop systems that can autonomously make decisions and take actions based on data insights, enhancing operational capabilities.
Collaboration: Work closely with cross-functional teams, including data scientists, engineers, and business analysts, to gather requirements and deliver tailored solutions.
Cloud technologies: Utilize cloud-based tools and services to enhance the scalability, security, and performance of data solutions.
Best practices: Implement best practices for data governance, quality, and security to maintain data integrity and compliance with relevant regulations.
Continuous improvement: Identify opportunities for process improvements and automation to enhance operational efficiency within data ecosystems.
Documentation: Maintain comprehensive documentation of data architecture designs, pipeline configurations, and analytics processes.

The Team (Artificial Intelligence & Data Engineering): In this age of disruption, organizations need to embrace data-driven decision-making to deliver enterprise value. Our team leverages data, analytics, robotics, and cognitive technologies to uncover insights and drive transformation in business. Key initiatives include:
Data ecosystem implementation: Collaborate with clients to implement large-scale data ecosystems that integrate structured and unstructured data for comprehensive insights.
Predictive analytics: Utilize machine learning and predictive modeling techniques to derive actionable insights and predict future scenarios.
AI solutions development: Develop AI-driven solutions that enhance data analytics capabilities, including natural language processing (NLP), computer vision, and recommendation systems.
Agentic AI development: Engage in projects that involve the development and deployment of agentic AI systems capable of autonomous decision-making and action-taking based on real-time data.
Operational efficiency: Drive operational efficiency by utilizing automation and cognitive techniques for data management, ensuring timely and accurate reporting.
Client engagement: Engage with clients to understand their unique challenges and tailor solutions that align with their strategic objectives.
Innovative solutions: Research and implement innovative technologies and methodologies that enhance data analytics capabilities and drive business value.
Training and support: Provide training and support to clients on data tools and platforms to ensure they can maximize the value of their data assets.

Qualifications required:
Education: Bachelor's degree in Computer Science, Data Science, Engineering, or a related field.
Experience: 3+ years of hands-on experience in data extraction and manipulation using various tools and programming languages. 3+ years of experience engineering and developing Palantir pipelines, with a strong understanding of data integration techniques. 3+ years of experience collaborating with Palantir Foundry data scientists and engineers on complex data projects. 2+ years of experience working with AI and machine learning technologies, including model development, deployment, and performance tuning. Familiarity with agentic AI concepts and applications, including experience developing or working with autonomous systems, is a plus.
Technical skills: Proficiency in programming languages such as Python, SQL, or R, along with experience in statistical analysis and machine learning techniques.
Problem-solving: Strong analytical and problem-solving skills, with the ability to think critically and creatively.
Communication: Excellent interpersonal and communication skills to effectively convey technical concepts to non-technical stakeholders.

Posted 3 months ago

Apply

3.0 - 8.0 years

10 - 20 Lacs

Bengaluru

Hybrid

Job Description: We are looking for a skilled Palantir Foundry Developer with strong hands-on experience in data engineering using PySpark and SQL. The ideal candidate should be proficient in designing, building, and maintaining scalable data pipelines and integrating with Palantir Foundry environments. Key skills: Palantir Foundry (mandatory); PySpark, advanced SQL, and data modelling; data pipeline development and optimization; ETL processes and data transformation.
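As a small illustration of the advanced-SQL side of such a role, the sketch below shows a common deduplication pattern: selecting the latest record per key from a slowly changing source. It uses Python's built-in sqlite3 so it is self-contained and runnable; the `accounts` table and all values are invented for the example.

```python
# Latest-record-per-key deduplication, a common ETL step when a source
# emits multiple versions of the same entity over time. Uses sqlite3 for
# portability; table and column names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER, updated_at TEXT, balance REAL)")
conn.executemany(
    "INSERT INTO accounts VALUES (?, ?, ?)",
    [
        (1, "2024-01-01", 100.0),
        (1, "2024-02-01", 150.0),  # newer version of account 1
        (2, "2024-01-15", 300.0),
    ],
)

# Latest snapshot per account: join each row back to its max timestamp.
rows = conn.execute("""
    SELECT a.id, a.balance
    FROM accounts a
    JOIN (SELECT id, MAX(updated_at) AS latest
          FROM accounts GROUP BY id) m
      ON a.id = m.id AND a.updated_at = m.latest
    ORDER BY a.id
""").fetchall()

print(rows)  # [(1, 150.0), (2, 300.0)]
```

The same query translates directly to Spark SQL inside a PySpark pipeline; on engines with window-function support, `ROW_NUMBER() OVER (PARTITION BY id ORDER BY updated_at DESC)` is an equivalent formulation.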

Posted 3 months ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Hyderabad

Remote

Hiring for a top MNC (long-term contract): Data Engineer - Palantir
Technical Capability:
- Foundry Certified (Data Engineering)
- Foundry Certified (Foundational)
- Time Series Data: Equipment & Sensors - O&G Context and Engineering
- Ontology Manager, Pipeline Builder, Data Lineage, Object Explorer
- Python & Spark (PySpark) - specifically PySpark, the extension of the Spark big data platform that Foundry uses
- SQL
- Mesa (Palantir proprietary language)
Experience: 5+ years
Soft Skills: Strong communication skills (with a focus on O&G engineering) and the ability to engage with multiple Product Managers. Ability to work independently and act as a voice of authority.
Interested candidates can share their resume: tejasri.m@i-q.co
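For context on the time-series work this role describes (equipment and sensor data in an O&G setting), here is a small, hypothetical sketch of the typical shaping step: rolling raw sensor readings up to hourly averages. At Foundry scale this would be done in PySpark; the pure-Python version below only illustrates the logic, and the sensor names and values are invented.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical sensor readings: (sensor_id, ISO timestamp, value).
readings = [
    ("pump-7", "2024-05-01T10:05:00", 61.0),
    ("pump-7", "2024-05-01T10:35:00", 63.0),
    ("pump-7", "2024-05-01T11:10:00", 70.0),
]

def hourly_averages(rows):
    """Bucket readings by (sensor, hour) and average each bucket."""
    buckets = defaultdict(list)
    for sensor, ts, value in rows:
        hour = datetime.fromisoformat(ts).replace(minute=0, second=0)
        buckets[(sensor, hour.isoformat())].append(value)
    return {key: sum(vals) / len(vals) for key, vals in buckets.items()}

print(hourly_averages(readings))
# {('pump-7', '2024-05-01T10:00:00'): 62.0, ('pump-7', '2024-05-01T11:00:00'): 70.0}
```

The equivalent PySpark would group by a truncated timestamp (`date_trunc("hour", ts)`) and aggregate with `avg`, but this captures the shape of the computation.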

Posted 3 months ago

Apply

6.0 - 10.0 years

20 - 25 Lacs

Hyderabad

Work from Office

Position: Palantir Foundry & PySpark Data Engineer
Location: Hyderabad (PG&E office)
Key Skills: Palantir Foundry, Python, Spark, AWS, PySpark
Experience: 6-10 years
Responsibilities:
- Palantir Foundry experience is a must; preferred candidates have worked with Code Repository, Contour, Data Connection, and Workshop.
- Develop and enhance data processing, orchestration, monitoring, and more by leveraging popular open-source software, AWS, and GitLab automation.
- Collaborate with product and technology teams to design and validate the capabilities of the data platform.
- Identify, design, and implement process improvements: automating manual processes, optimizing for usability, and re-designing for greater scalability.
- Provide technical support and usage guidance to users of our platform's services.
- Drive the creation and refinement of metrics, monitoring, and alerting mechanisms to give us the visibility we need into our production services.
Qualifications:
- Experience building and optimizing data pipelines in a distributed environment
- Experience supporting and working with cross-functional teams
- Proficiency working in a Linux environment
- 4+ years of advanced working knowledge of Palantir Foundry, SQL, Python, and PySpark
- 2+ years of experience with a broad range of AWS technologies
- Experience with tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline
- Experience with platform monitoring and alerting tools

Posted 3 months ago

Apply

8.0 - 13.0 years

5 - 12 Lacs

Mysuru, Pune

Hybrid

Role & responsibilities
- 4+ years of experience as a Data Engineer or in a similar role
- 3+ years of experience building data solutions at scale using one of the enterprise data platforms: Palantir Foundry, Snowflake, Cloudera/Hive, or Amazon Redshift
- 3+ years of experience with SQL and NoSQL databases (Snowflake or Hive)
- 3+ years of hands-on experience programming in Python, Spark, or C#
- Experience with DevOps principles and CI/CD
- Strong understanding of ETL principles and data integration patterns
- Experience with Agile and iterative development processes is a plus
- Experience with cloud services such as AWS and Azure, and other big data tools like Spark and Kafka, is a plus (not mandatory)
- Knowledge of TypeScript and full-stack development experience is a plus (not mandatory)

Posted 3 months ago

Apply

6.0 - 11.0 years

25 - 30 Lacs

Hyderabad

Hybrid

About: We are hiring a Lead Data Solutions Engineer with expertise in PySpark, Python, and preferably Palantir Foundry. You will focus on transforming complex operational data into clear customer communications for Planned Power Outages (PPO) within the energy sector.
Role & responsibilities
- Build, enhance, and manage scalable data pipelines using PySpark and Python to process dynamic operational data.
- Interpret and consolidate backend system changes into single-source customer notifications.
- Leverage Foundry or equivalent platforms to build dynamic data models and operational views.
- Act as problem owner for outage communication workflows and edge cases.
- Collaborate with operations and communication stakeholders to ensure consistent message delivery.
- Implement logic and validation layers to filter out inconsistencies in notifications.
- Continuously optimize data accuracy and message clarity.
Preferred candidate profile
- 5+ years of experience in data engineering/data solutions.
- Strong command of PySpark, Python, and large-scale data processing.
- Experience in dynamic, evolving environments with frequent changes.
- Strong communication and collaboration skills.
- Ability to simplify uncertain data pipelines into actionable formats.
Nice to have
- Experience with Palantir Foundry, Databricks, or AWS Glue.
- Exposure to utility, energy, or infrastructure domains.
- Familiarity with customer communication systems, SLA governance, or outage scheduling.
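The consolidation task this posting describes (collapsing a stream of backend schedule changes into one current notification per outage) can be sketched roughly as below. The event fields, IDs, and message format are invented for illustration and are not the actual PPO data model.

```python
# Hypothetical sketch: keep only the latest update per outage, then render
# one notification line per outage for the customer.
events = [
    {"outage_id": "PPO-1", "customer": "C-9", "window": "09:00-13:00", "seq": 1},
    {"outage_id": "PPO-1", "customer": "C-9", "window": "10:00-14:00", "seq": 2},  # reschedule
    {"outage_id": "PPO-2", "customer": "C-9", "window": "15:00-16:00", "seq": 1},
]

def consolidate(stream):
    latest = {}
    for e in stream:  # highest sequence number wins per outage
        cur = latest.get(e["outage_id"])
        if cur is None or e["seq"] > cur["seq"]:
            latest[e["outage_id"]] = e
    return [f"Outage {e['outage_id']}: power off {e['window']}"
            for e in sorted(latest.values(), key=lambda e: e["outage_id"])]

print(consolidate(events))
# ['Outage PPO-1: power off 10:00-14:00', 'Outage PPO-2: power off 15:00-16:00']
```

In a PySpark pipeline the same "latest record wins" step would typically be a window over `outage_id` ordered by sequence; the validation layers mentioned above would then filter contradictory or stale rows before messages go out.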

Posted 3 months ago

Apply

10 - 17 years

20 - 25 Lacs

Kolkata

Remote

Role & Responsibilities
- Design and implement a next-generation digital twin platform for healthcare payer workflows.
- Model healthcare processes into DTDL or ontology formats using Azure Digital Twins or Palantir Foundry.
- Lead integration of Celonis and Camunda for process mining and workflow automation.
- Develop and maintain real-time telemetry pipelines to enable live optimization via AI agents.
- Collaborate with cross-functional teams, including product managers, AI/ML engineers, and domain experts.
- Own architectural decisions, ensuring scalability, security, and compliance with healthcare standards.
- Participate in technical reviews, performance tuning, and roadmap planning.
Preferred Candidate Profile
- 10+ years of experience in healthcare technology, digital twin, or enterprise architecture.
- Proven expertise in tools such as Azure Digital Twins, Palantir Foundry, Celonis, and Camunda.
- Deep understanding of payer-side operations and medical management workflows.
- Familiarity with IoT/event-streaming architectures and AI/ML pipeline integration.
- Strong grasp of healthcare data standards such as FHIR and HL7 is a plus.
- Excellent communication and stakeholder management skills.
- Prior experience delivering enterprise healthcare solutions for Indian or global clients is desirable.
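To make "model healthcare processes into DTDL" concrete: DTDL is the JSON-based modeling language used by Azure Digital Twins. Below is a minimal, hand-written interface for a hypothetical payer "claim review" process step; the field names follow the public DTDL spec, but the model itself is invented for illustration, not taken from any real workflow.

```python
import json

# A minimal DTDL v3 interface for a hypothetical "claim review" twin.
# "@id" uses the DTMI naming scheme; "contents" mixes a Property (state)
# with a Telemetry (streamed measurement).
claim_review = {
    "@context": "dtmi:dtdl:context;3",
    "@id": "dtmi:example:payer:ClaimReview;1",
    "@type": "Interface",
    "displayName": "Claim Review",
    "contents": [
        {"@type": "Property", "name": "status", "schema": "string"},
        {"@type": "Telemetry", "name": "queueDepth", "schema": "integer"},
    ],
}
print(json.dumps(claim_review, indent=2))
```

In practice such models are authored as `.json` files and uploaded to Azure Digital Twins (or mapped into a Foundry ontology); generating them from Python is just a convenient way to keep them under version control alongside pipeline code.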

Posted 4 months ago

Apply

7 - 12 years

8 - 18 Lacs

Hyderabad

Remote

Job Title: Digital Twin Architect - Healthcare (India / Remote)
Location: Remote (India-based candidates preferred)
Experience: 7+ years
Start Date: Immediate / as per notice period
Type: Full-Time / Contract
About the Role
We are hiring experienced Digital Twin Architects to lead the design and build of a next-gen digital twin platform for medical management in the healthcare payer domain. The role involves working with Azure Digital Twins, Palantir Foundry, Celonis, and Camunda, and integrating AI/telemetry pipelines for real-time feedback and optimization. If you have strong healthcare domain knowledge, especially in payer systems, and hands-on experience with digital-twin/ontology solutions, this is a unique opportunity to work on a global project with cutting-edge technology.
Key Responsibilities
- Build and manage a digital-twin platform for payer-side medical workflows.
- Translate processes into DTDL or ontology models using Azure Digital Twins / Palantir Foundry.
- Use Celonis / Camunda for process mining and workflow improvements.
- Integrate AI agents and telemetry pipelines to enable live optimization.
- Collaborate with cross-functional teams (product, AI/ML, and business stakeholders).
- Own architectural decisions and ensure scalability and compliance.
Must-Have Skills
- 7+ years of experience in digital twin, graph/ontology, or healthcare architecture.
- Strong understanding of payer operations and medical management workflows.
- Experience with tools such as Azure Digital Twins or Palantir Foundry, Celonis, Camunda, and event-streaming/IoT pipelines.
- Excellent communication and stakeholder management skills.
Good to Have
- Knowledge of FHIR / HL7 standards.
- Experience with AI/ML agent integration in live environments.
- Background in enterprise healthcare product implementation (India or global clients).
Why Join Us?
- Opportunity to work on a global healthcare product.
- Be part of a high-impact project using the latest in AI, IoT, and digital twin tech.
- Remote-first culture with cross-border collaboration.
- Competitive compensation and flexible work hours.

Posted 4 months ago

Apply

5 - 10 years

10 - 18 Lacs

Bengaluru

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Palantir Foundry
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years of full-time education
Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. You will be responsible for ensuring that applications are developed according to specifications and delivered on time. Your typical day will involve collaborating with the team to understand requirements, designing and coding applications, and testing and debugging them to ensure they function properly. You will also troubleshoot and resolve any issues that arise during development. Your creativity and technical expertise will play a crucial role in delivering high-quality applications.
Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Design and build applications according to business process and application requirements.
- Configure applications to ensure they meet the specified functionality.
- Collaborate with the team to understand requirements and translate them into technical specifications.
- Code and test applications to ensure they function properly.
- Troubleshoot and resolve any issues that arise during development.
Professional & Technical Skills:
- Must-have skills: proficiency in Palantir Foundry.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity.
Additional Information:
- The candidate should have a minimum of 5 years of experience in Palantir Foundry.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted 4 months ago

Apply

2.0 - 15.0 years

7 - 25 Lacs

pune, chennai, bengaluru

Work from Office

Requirements:
Experience level: 2+ years.
Experience: Extensive hands-on experience with Palantir Foundry, managing end-to-end development of data products.
Technical Expertise: In-depth knowledge of the Palantir Foundry platform, including data integration (Data Connections), data transformation (Code Repository & Pipeline Builder), analysis (Contour and Quiver), visualization (Workshop), and Ontology Manager. Must also have good knowledge of Spark (PySpark) and TypeScript. Knowledge of AIP is an added advantage.
Responsibilities:
- Team Management: Lead and manage a team of technical professionals (data engineers and application developers), ensuring effective collaboration and productivity.
- Client Interaction: Serve as the primary point of contact for clients, understanding their needs and ensuring successful project delivery.
- Status Reporting: Prepare and present detailed status reports on ongoing projects to stakeholders, highlighting progress, risks, and mitigation plans.
- Issue Resolution: Proactively identify and resolve any technical challenges or roadblocks faced by the team, ensuring smooth project execution.
- Project Management: Oversee the entire project lifecycle, from planning to execution, ensuring timely delivery within scope and budget.

Posted Date not available

Apply

