4.0 - 8.0 years
0 Lacs
Hyderabad, Telangana
On-site
You will need strong Python programming skills and expertise in PySpark queries and AWS. Your responsibilities include developing and enhancing data-processing, orchestration, and monitoring capabilities using popular open-source software, AWS, and GitLab automation. You will collaborate with product and technology teams to design and validate the capabilities of the data platform, and you will identify, design, and implement process improvements: automating manual processes, optimizing for usability, and redesigning for greater scalability. Providing technical support and usage guidance to users of the platform services is also part of the role, and you will drive the creation and refinement of metrics, monitoring, and alerting mechanisms that provide visibility into production services. To be successful in this role, you should have experience building and optimizing data pipelines in a distributed environment, experience supporting and working with cross-functional teams, and proficiency in a Linux environment. A minimum of 4 years of advanced working knowledge of SQL, Python, and PySpark is required; expertise in PySpark queries is a must. Knowledge of Palantir and experience with tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline are beneficial, as is experience with platform monitoring and alerting tools.
Posted 2 days ago
4.0 - 8.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Data Engineer, you will develop and enhance data-processing, orchestration, and monitoring capabilities using popular open-source software, AWS, and GitLab automation. You will collaborate with product and technology teams to design and validate the capabilities of the data platform, and you will identify, design, and implement process improvements: automating manual processes, optimizing for usability, and redesigning for greater scalability. Providing technical support and usage guidance to users of our platform services is a crucial aspect of this position, and you will drive the creation and refinement of metrics, monitoring, and alerting mechanisms that provide visibility into our production services. The ideal candidate has experience building and optimizing data pipelines in a distributed environment, experience supporting and working with cross-functional teams, and proficiency in a Linux environment. A minimum of 4 years of advanced working knowledge of SQL, Python, and PySpark is required, with a strong emphasis on PySpark queries. Knowledge of Palantir and experience with tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline are advantageous, as is experience with platform monitoring and alerting tools. This Data Engineer role is based in Hyderabad with a contract duration of 12+ months, likely to be extended. Strong Python programming, PySpark query, and AWS skills are essential; Palantir is a secondary skill. If you are passionate about data engineering, possess the necessary technical skills, and thrive in a collaborative environment, we encourage you to apply for this opportunity.
Posted 3 days ago