4.0 - 8.0 years
0 Lacs
Hyderabad, Telangana
On-site
You will be responsible for developing and enhancing data processing, orchestration, monitoring, and more by leveraging popular open-source software, AWS, and GitLab automation. You will collaborate with product and technology teams to design and validate the capabilities of the data platform; identify, design, and implement process improvements, automating manual processes, optimizing for usability, and redesigning for greater scalability; provide technical support and usage guidance to users of the platform's services; and drive the creation and refinement of metrics, monitoring, and alerting mechanisms to give the team the visibility it needs into production services.

You should have experience building and optimizing data pipelines in a distributed environment, experience supporting and working with cross-functional teams, and proficiency working in a Linux environment. A minimum of 4 years of advanced working knowledge of SQL, Python, and PySpark is required; knowledge of PySpark queries is a must, and knowledge of Palantir is also needed. Experience using tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline, as well as experience with platform monitoring and alerting tools, is also expected.
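As a rough illustration of the kind of PySpark work this posting describes (distributed data processing on AWS), here is a minimal sketch of a pipeline that reads raw data, aggregates it, and writes a partitioned result. The S3 paths and column names are hypothetical placeholders, not details from the posting.

```python
# Minimal PySpark sketch of a distributed data-processing query.
# All paths and column names (event_type, ts, user_id) are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-daily-rollup").getOrCreate()

# Read raw event data from a (hypothetical) S3 location.
events = spark.read.parquet("s3://example-bucket/raw/events/")

# Aggregate: page views per user per day.
daily_counts = (
    events
    .filter(F.col("event_type") == "page_view")
    .withColumn("event_date", F.to_date("ts"))
    .groupBy("user_id", "event_date")
    .agg(F.count("*").alias("views"))
)

# Write the result back, partitioned by date for downstream consumers.
daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_page_views/"
)
```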
Posted 1 month ago
4.0 - 15.0 years
0 Lacs
Hyderabad, Telangana
On-site
You are a highly skilled professional with over 15 years of experience and strong expertise in Python programming, PySpark queries, AWS, GIS, and Palantir Foundry. Your primary responsibilities will include developing and enhancing data processing, orchestration, and monitoring using popular open-source software, AWS, and GitLab automation. You will collaborate closely with product and technology teams to design and validate the capabilities of the data platform. Additionally, you will identify, design, and implement process improvements, automate manual processes, optimize for usability, and redesign for greater scalability. Your role will also involve providing technical support and usage guidance to users of the platform's services, and driving the creation and refinement of metrics, monitoring, and alerting mechanisms to ensure visibility into production services.

To be successful in this position, you should have experience building and optimizing data pipelines in a distributed environment, experience working with cross-functional teams, and proficiency in a Linux environment. You must have at least 4 years of advanced working knowledge of SQL, Python, and PySpark queries. Knowledge of Palantir and experience with tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline are highly desirable. Additionally, experience with platform monitoring and alerting tools will be beneficial.
Posted 1 month ago
4.0 - 15.0 years
0 Lacs
Hyderabad, Telangana
On-site
You have a great opportunity to join our team as a Senior Data Engineer with a strong focus on Python programming, PySpark queries, AWS, GIS, and Palantir Foundry. With over 15 years of experience in the field, you will play a crucial role in developing and enhancing data processing, orchestration, monitoring, and more by leveraging popular open-source software, AWS, and GitLab automation. Your collaboration with product and technology teams will be essential in designing and validating the capabilities of the data platform. You will be responsible for identifying, designing, and implementing process improvements, automating manual processes, optimizing for usability, and redesigning for greater scalability. Providing technical support and usage guidance to users of the platform's services will be a key aspect of your role. Additionally, you will drive the creation and refinement of metrics, monitoring, and alerting mechanisms to provide the visibility needed into production services.

To excel in this role, you should have experience building and optimizing data pipelines in a distributed environment and the ability to support and work with cross-functional teams. Proficiency working in a Linux environment is a must, along with 4+ years of advanced working knowledge of SQL, Python, and PySpark; knowledge of PySpark queries is a mandatory requirement. Familiarity with Palantir and experience using tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline will be beneficial. Experience with platform monitoring and alerting tools will also be considered a plus.
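Because this role pairs advanced SQL with PySpark, the following sketch shows how the two typically meet in practice: a DataFrame registered as a view and queried with a SQL window function. The table and column names (orders, region, amount) are invented for illustration only.

```python
# Hypothetical sketch: running SQL against a registered PySpark view.
# Table and column names are illustrative, not from the posting.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-on-spark-sketch").getOrCreate()

# Register a DataFrame as a temporary view so it is queryable with SQL.
orders = spark.read.parquet("s3://example-bucket/curated/orders/")
orders.createOrReplaceTempView("orders")

# A window-function query executed by the same distributed engine:
# the top three orders by amount within each region.
top_orders = spark.sql("""
    SELECT region, order_id, amount
    FROM (
        SELECT region, order_id, amount,
               ROW_NUMBER() OVER (PARTITION BY region
                                  ORDER BY amount DESC) AS rn
        FROM orders
    )
    WHERE rn <= 3
""")

top_orders.show()
```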
Posted 2 months ago
4.0 - 15.0 years
0 Lacs
Hyderabad, Telangana
On-site
You should be strong in Python programming and PySpark queries, with experience in AWS, GIS, and Palantir Foundry. With over 15 years of experience, you will be based in Hyderabad and work in the office 5 days a week. Your responsibilities will include developing and enhancing data processing, orchestration, and monitoring using open-source software, AWS, and GitLab automation. You will collaborate with product and technology teams to design and validate data platform capabilities. Additionally, you will identify, design, and implement process improvements, provide technical support to platform users, and drive the creation of metrics and monitoring mechanisms for production services.

To qualify for this role, you should have experience building and optimizing data pipelines in a distributed environment, experience working with cross-functional teams, and proficiency in a Linux environment. You should also have at least 4 years of advanced knowledge of SQL, Python, and PySpark, along with knowledge of Palantir. Experience with tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline, as well as platform monitoring tools, is also required.
Posted 2 months ago
4.0 - 8.0 years
0 Lacs
Hyderabad, Telangana
On-site
You will be required to have strong Python programming skills and expertise in PySpark queries and AWS. As part of your responsibilities, you will develop and enhance data processing, orchestration, monitoring, and more by utilizing popular open-source software, AWS, and GitLab automation. Collaboration with product and technology teams to design and validate the capabilities of the data platform will be a key aspect of your role. You will also identify, design, and implement process improvements, automate manual processes, optimize for usability, and redesign for greater scalability. Providing technical support and usage guidance to users of the platform's services will also be part of your responsibilities, and you will drive the creation and refinement of metrics, monitoring, and alerting mechanisms to provide visibility into production services.

To be successful in this role, you should have experience building and optimizing data pipelines in a distributed environment, experience supporting and working with cross-functional teams, and proficiency working in a Linux environment. A minimum of 4 years of advanced working knowledge of SQL, Python, and PySpark is required, with expertise in PySpark queries being a must. Knowledge of Palantir and experience using tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline will be beneficial. Experience with platform monitoring and alerting tools will also be an advantage.
Posted 2 months ago
4.0 - 8.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Data Engineer, you will be responsible for developing and enhancing data processing, orchestration, monitoring, and more by leveraging popular open-source software, AWS, and GitLab automation. Collaborating with product and technology teams to design and validate the capabilities of the data platform will be a key part of your role. You will also identify, design, and implement process improvements, automate manual processes, optimize for usability, and redesign for greater scalability. Providing technical support and usage guidance to users of the platform's services is also a crucial aspect of this position. Additionally, you will drive the creation and refinement of metrics, monitoring, and alerting mechanisms to provide visibility into production services.

The ideal candidate will have experience building and optimizing data pipelines in a distributed environment, experience supporting and working with cross-functional teams, and proficiency working in a Linux environment. A minimum of 4 years of advanced working knowledge of SQL, Python, and PySpark is required, with a strong emphasis on PySpark queries. Knowledge of Palantir and experience using tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline will be advantageous. Experience with platform monitoring and alerting tools is also desirable.

This is a Data Engineer role based in Hyderabad with a contract duration of 12+ months, likely to be extended. Strong skills in Python programming, PySpark queries, and AWS are essential for this position, with Palantir as a secondary skill. If you are passionate about data engineering, possess the necessary technical skills, and thrive in a collaborative environment, we encourage you to apply for this exciting opportunity.
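These postings repeatedly ask for experience driving metrics, monitoring, and alerting on AWS. One common building block there is a CloudWatch metric alarm; the sketch below shows what defining one with boto3 can look like. The namespace, metric name, thresholds, and SNS topic ARN are all hypothetical placeholders, not details from the posting.

```python
# Hypothetical sketch of an AWS alerting building block: a CloudWatch alarm
# on a custom pipeline metric. Namespace, metric name, and SNS topic ARN
# are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="pipeline-failed-records-high",
    Namespace="ExampleDataPlatform",          # hypothetical custom namespace
    MetricName="FailedRecords",               # hypothetical pipeline metric
    Statistic="Sum",
    Period=300,                               # evaluate in 5-minute windows
    EvaluationPeriods=3,                      # alarm after 3 breaching windows
    Threshold=100,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",          # gaps in the metric stay OK
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:example-alerts"],
    AlarmDescription="Alert when the pipeline drops too many records.",
)
```

Wiring the alarm's SNS topic to a pager or chat integration is what turns such a metric into the kind of production visibility the role describes.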
Posted 2 months ago