Palantir Tech Lead

Skills: Python, PySpark, and Palantir
Experience: 15+ years
Location: onsite in Hyderabad
We need a strong, hands-on lead engineer.

Tasks and Responsibilities:
· Lead data engineering activities on moderate to complex data- and analytics-centric problems that have broad impact and require in-depth analysis to obtain the desired results; assemble, enhance, maintain, and optimize current pipelines to enable cost savings and meet individual project or enterprise maturity objectives
· Advanced working knowledge of SQL, Python, and PySpark
· Experience using tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline
· Experience with platform monitoring and alerting tools
· Work closely with Subject Matter Experts (SMEs) to design and develop Foundry front-end applications, along with the ontology (data model) and data pipelines supporting those applications
· Implement data transformations to derive new datasets or create the Foundry Ontology Objects needed by business applications
· Implement operational applications using Foundry tools (Workshop, Map, and/or Slate)
· Actively participate in agile/scrum ceremonies (stand-ups, planning, retrospectives, etc.)
· Create and maintain documentation describing the data catalog and data objects
· Maintain applications as usage grows and requirements change
· Promote a continuous-improvement mindset by engaging in after-action reviews and sharing learnings
· Use strong communication skills, especially for explaining technical concepts to non-technical business leaders
Job Description: Imaging and Capture Solutions for Document Processing

· 8+ years of relevant experience with ECM imaging and capture products and solutions for document processing
· Strong hands-on experience with OpenText Captiva (now Intelligent Capture)
· Good understanding of imaging and capture concepts: scan, classify, index, extract, etc.
· Experience executing successful implementations of document processing solutions in an enterprise environment
· Strong knowledge of .NET and scripting
· Experience integrating with REST APIs and applications such as ECM repositories
· Experience with other ECM tools such as OpenText Documentum (an added advantage)
· Experience in Agile delivery; ability to manage a team and adhere to the delivery plan
· Strong interpersonal skills and the ability to manage customers and internal stakeholders
Palantir Data Engineering

Skills: Python, PySpark
Experience: 5+ years

Tasks and Responsibilities:
· Advanced working knowledge of SQL, Python, and PySpark
· Experience using tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline
· Experience with platform monitoring and alerting tools
· Work closely with Subject Matter Experts (SMEs) to design and develop Foundry front-end applications, along with the ontology (data model) and data pipelines supporting those applications
· Implement data transformations to derive new datasets or create the Foundry Ontology Objects needed by business applications
· Implement operational applications using Foundry tools (Workshop, Map, and/or Slate)
· Actively participate in agile/scrum ceremonies (stand-ups, planning, retrospectives, etc.)
· Create and maintain documentation describing the data catalog and data objects
· Maintain applications as usage grows and requirements change
· Promote a continuous-improvement mindset by engaging in after-action reviews and sharing learnings
· Use strong communication skills, especially for explaining technical concepts to non-technical business leaders
This is an on-site role based in Hyderabad (100% onsite in the Hyderabad office). Below is a brief overview of what we're looking for:
· Strong experience with Python and PySpark coding
· Strong expertise with AWS Cloud
· A strong understanding of, or experience with, Palantir Foundry
· Seniors: 10+ years of required experience; juniors: 3+ years
· Key skills: Python, PySpark, AWS, GIS
· Two rounds of coding sessions

Responsibilities:
· Develop and enhance data processing, orchestration, monitoring, and more by leveraging popular open-source software, AWS, and GitLab automation
· Collaborate with product and technology teams to design and validate the capabilities of the data platform
· Identify, design, and implement process improvements: automating manual processes, optimizing for usability, and re-designing for greater scalability
· Provide technical support and usage guidance to the users of our platform's services
· Drive the creation and refinement of metrics, monitoring, and alerting mechanisms to give us the visibility we need into our production services

Qualifications:
· Experience building and optimizing data pipelines in a distributed environment
· Experience supporting and working with cross-functional teams
· Proficiency working in a Linux environment
· 5+ years of advanced working knowledge of SQL, Python, and PySpark
· Knowledge of, or hands-on experience with, Palantir Foundry
· Experience using tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline
· Experience with platform monitoring and alerting tools
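Several of the postings above ask for experience with metrics, monitoring, and alerting mechanisms. The core idea can be sketched minimally as comparing observed metrics against thresholds and emitting alerts for breaches; the metric names and threshold values below are invented for illustration and are not from any posting.

```python
from dataclasses import dataclass


@dataclass
class Alert:
    metric: str
    value: float
    threshold: float


def evaluate_metrics(observed, thresholds):
    """Return an Alert for every observed metric that exceeds its threshold.

    Metrics without a configured threshold are ignored. Names and limits
    are illustrative only.
    """
    alerts = []
    for name, value in observed.items():
        limit = thresholds.get(name)
        if limit is not None and value > limit:
            alerts.append(Alert(name, value, limit))
    return alerts


if __name__ == "__main__":
    observed = {"pipeline_lag_minutes": 42.0, "failed_jobs": 0.0}
    thresholds = {"pipeline_lag_minutes": 30.0, "failed_jobs": 1.0}
    for a in evaluate_metrics(observed, thresholds):
        print(f"ALERT: {a.metric}={a.value} exceeds threshold {a.threshold}")
```

In production this evaluation would typically be handled by a dedicated monitoring stack rather than hand-rolled code, but the threshold-check pattern is the same.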
As a Senior Data Engineer with over 6 years of experience, you will be responsible for developing and enhancing data processing, orchestration, monitoring, and more by utilizing popular open-source software, AWS, and GitLab automation. You will collaborate closely with product and technology teams to design and validate the capabilities of the data platform. Your role will involve identifying, designing, and implementing process improvements, such as automating manual processes, optimizing for usability, and re-designing for greater scalability. Additionally, you will provide technical support and usage guidance to the users of our platform services. Your key responsibilities will also include driving the creation and refinement of metrics, monitoring, and alerting mechanisms to provide the necessary visibility into our production services.

To excel in this role, you should have experience in building and optimizing data pipelines in a distributed environment, as well as in supporting and working with cross-functional teams. Proficiency in working in a Linux environment is essential, along with 4+ years of advanced working knowledge of SQL, Python, and PySpark. Knowledge of Palantir is preferred. You should also have experience using tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline; experience with platform monitoring and alerting tools will be beneficial in fulfilling the requirements of this role.

If you are ready to take on this challenging position and meet the qualifications mentioned above, please share your resume with Sunny Tiwari at stiwari@enexusglobal.com. We look forward to potentially having you as part of our team.

Sunny Tiwari, 510-925-0380