
6 Jenkins/CodeBuild Jobs

JobPe aggregates results for easy access, but you apply directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Data Engineer, you will be responsible for developing and enhancing data processing, orchestration, monitoring, and more by leveraging popular open-source software, AWS, and GitLab automation. You will collaborate with product and technology teams to design and validate the capabilities of the data platform. Additionally, you will identify, design, and implement process improvements, including automating manual processes, optimizing for usability, and redesigning for greater scalability. Providing technical support and usage guidance to users of our platform's services will also be a key part of your role. You will drive the creation and refinement of metrics, monitoring, and alerting mechanisms to give us the visibility we need into our production services.

To qualify for this position, you should have experience building and optimizing data pipelines in a distributed environment, supporting and working with cross-functional teams, and proficiency working in a Linux environment. A minimum of 5 years of advanced working knowledge of SQL, Python, and PySpark is required. Knowledge of Palantir and experience with tools such as Git/Bitbucket, Jenkins/CodeBuild, CodePipeline, and platform monitoring and alerting tools will be beneficial for this role.

Posted 5 days ago

Apply
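The "building and optimizing data pipelines" work described in the listing above typically looks something like the following. This is a minimal PySpark sketch with hypothetical table and path names, not an implementation taken from the posting.

```python
# Minimal PySpark sketch: read raw events, apply a SQL-style aggregation,
# and write a partitioned output table. Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-events-rollup").getOrCreate()

# Read raw data, e.g. landed on S3 by an upstream ingestion job.
events = spark.read.parquet("s3://example-bucket/raw/events/")

# Derive a daily per-user rollup; the same logic could be written in Spark SQL.
daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "user_id")
    .agg(
        F.count("*").alias("event_count"),
        F.countDistinct("session_id").alias("session_count"),
    )
)

# Partition by date so downstream readers can prune efficiently.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_user_events/"
)
```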

4.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Senior Data Engineer with over 6 years of experience, you will be responsible for developing and enhancing data processing, orchestration, monitoring, and more by utilizing popular open-source software, AWS, and GitLab automation. You will collaborate closely with product and technology teams to design and validate the capabilities of the data platform. Your role will involve identifying, designing, and implementing process improvements, such as automating manual processes, optimizing for usability, and redesigning for greater scalability. Additionally, you will provide technical support and usage guidance to users of our platform services. Your key responsibilities will also include driving the creation and refinement of metrics, monitoring, and alerting mechanisms to provide the necessary visibility into our production services.

To excel in this role, you should have experience building and optimizing data pipelines in a distributed environment, as well as supporting and working with cross-functional teams. Proficiency in a Linux environment is essential, along with 4+ years of advanced working knowledge of SQL, Python, and PySpark. Knowledge of Palantir is preferred, and you should have experience with tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline. Experience with platform monitoring and alerting tools will be beneficial.

If you are ready to take on this position and meet the qualifications above, please share your resume with Sunny Tiwari at stiwari@enexusglobal.com. We look forward to potentially having you as part of our team. Sunny Tiwari, 510-925-0380

Posted 2 weeks ago

Apply
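The "metrics, monitoring, and alerting" responsibility in the listing above is usually implemented with the platform's native tooling; since this stack is AWS-based, a hedged sketch using boto3 and CloudWatch might look like this. The namespace, metric, alarm, and SNS topic names are illustrative assumptions, not details from the posting.

```python
# Hedged sketch: publish a custom pipeline metric and create an alarm on it.
# All names (namespace, metric, topic ARN) are hypothetical placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Emit a custom metric, e.g. the number of rows a pipeline run failed to process.
cloudwatch.put_metric_data(
    Namespace="DataPlatform/Pipelines",
    MetricData=[
        {
            "MetricName": "FailedRows",
            "Dimensions": [{"Name": "Pipeline", "Value": "daily_user_events"}],
            "Value": 0,
            "Unit": "Count",
        }
    ],
)

# Alert (via an SNS topic) if failed rows stay above zero for two 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="daily_user_events-failed-rows",
    Namespace="DataPlatform/Pipelines",
    MetricName="FailedRows",
    Dimensions=[{"Name": "Pipeline", "Value": "daily_user_events"}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=2,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:data-platform-alerts"],
)
```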

9.0 - 13.0 years

0 Lacs

Hyderabad, Telangana

On-site

You will lead data engineering activities on moderate to complex data and analytics-centric problems that have broad impact and require in-depth analysis to achieve the desired results. Your responsibilities will include assembling, enhancing, maintaining, and optimizing current data, enabling cost savings, and meeting project or enterprise maturity objectives. The role requires an advanced working knowledge of SQL, Python, and PySpark, experience with tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline, and familiarity with platform monitoring and alerting tools.

Collaboration with Subject Matter Experts (SMEs) is essential for designing and developing Foundry front-end applications with the ontology (data model) and the data pipelines supporting those applications. You will implement data transformations to derive new datasets or create the Foundry Ontology Objects needed by business applications, and you will build operational applications using Foundry tools such as Workshop, Map, and/or Slate. Active participation in agile/scrum ceremonies (stand-ups, planning, retrospectives, etc.) is expected. Documentation is a crucial part of this position: you will create and maintain documentation describing the data catalog and data objects, and you will maintain applications as their usage grows and requirements change. A continuous-improvement mindset is encouraged, and you will be expected to engage in after-action reviews and share learnings. Strong communication skills, especially in explaining technical concepts to non-technical business leaders, will be essential for success in this role.

Posted 1 month ago

Apply
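The "implement data transformations to derive new datasets" part of the listing above is commonly done with Foundry's Python transforms API. The sketch below assumes that API and uses hypothetical dataset paths; the actual ontology objects and pipelines would be defined in the project itself.

```python
# Hedged sketch of a Foundry Python transform: derive a cleaned dataset that can
# back an Ontology object or a Workshop application. Dataset paths are hypothetical.
from pyspark.sql import functions as F
from transforms.api import transform_df, Input, Output


@transform_df(
    Output("/Company/analytics/datasets/clean_customers"),
    raw_customers=Input("/Company/analytics/datasets/raw_customers"),
)
def clean_customers(raw_customers):
    # Normalize identifiers and drop records without a primary key, so the
    # resulting dataset is safe to map onto an Ontology object type.
    return (
        raw_customers
        .withColumn("customer_id", F.upper(F.trim("customer_id")))
        .filter(F.col("customer_id").isNotNull())
        .dropDuplicates(["customer_id"])
    )
```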

4.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

You will be required to have strong Python programming skills and expertise in PySpark queries and AWS. As part of your responsibilities, you will develop and enhance data processing, orchestration, monitoring, and more by utilizing popular open-source software, AWS, and GitLab automation. Collaboration with product and technology teams to design and validate the capabilities of the data platform will be a key aspect of your role. You will also identify, design, and implement process improvements, automate manual processes, optimize for usability, and redesign for greater scalability. Providing technical support and usage guidance to users of the platform services will also be part of your responsibilities, and you will drive the creation and refinement of metrics, monitoring, and alerting mechanisms to provide visibility into production services.

To be successful in this role, you should have experience building and optimizing data pipelines in a distributed environment, supporting and working with cross-functional teams, and proficiency working in a Linux environment. A minimum of 4 years of advanced working knowledge of SQL, Python, and PySpark is required, with expertise in PySpark queries being a must. Knowledge of Palantir and experience with tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline will be beneficial. Experience with platform monitoring and alerting tools will also be an advantage.

Posted 2 months ago

Apply
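The "GitLab automation" mentioned in the listing above usually means CI pipelines plus a little API glue. Below is a minimal, hedged sketch of triggering a GitLab pipeline from Python; the GitLab host, project ID, trigger token, and variable names are placeholders, not details from the posting.

```python
# Hedged sketch: trigger a GitLab CI pipeline (e.g. a data-platform deploy job)
# through the GitLab REST API. Host, project ID, token, and variables are placeholders.
import os
import requests

GITLAB_URL = "https://gitlab.example.com/api/v4"
PROJECT_ID = 1234  # hypothetical project

response = requests.post(
    f"{GITLAB_URL}/projects/{PROJECT_ID}/trigger/pipeline",
    data={
        "token": os.environ["GITLAB_TRIGGER_TOKEN"],  # pipeline trigger token
        "ref": "main",
        "variables[TARGET_ENV]": "staging",  # consumed by the CI job, if defined
    },
    timeout=30,
)
response.raise_for_status()
print("Triggered pipeline:", response.json().get("web_url"))
```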

4.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Data Platform Engineer, you will be responsible for developing and enhancing data processing, orchestration, monitoring, and more using popular open-source software, AWS, and GitLab automation. Your role will involve collaborating with product and technology teams to design and validate the capabilities of the data platform. Additionally, you will identify, design, and implement process improvements, such as automating manual processes and optimizing for usability and scalability. A key aspect of your role will be providing technical support and usage guidance to users of our platform services, and you will drive the creation and refinement of metrics, monitoring, and alerting mechanisms to ensure visibility into our production services.

To be successful in this position, you should have experience building and optimizing data pipelines in a distributed environment, be comfortable working with cross-functional teams, and be proficient in a Linux environment. A minimum of 4 years of advanced working knowledge of SQL, Python, and PySpark is required, with a specific emphasis on PySpark queries. Knowledge of Palantir and experience with tools such as Git/Bitbucket, Jenkins/CodeBuild, CodePipeline, and platform monitoring and alerting tools are also beneficial for this role.

Posted 2 months ago

Apply
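Since Jenkins/CodeBuild and CodePipeline appear in each of these listings, here is a hedged boto3 sketch of kicking off and checking a CodePipeline run (which may in turn execute a CodeBuild stage). The pipeline name is a placeholder.

```python
# Hedged sketch: start an AWS CodePipeline execution and poll its status.
# The pipeline name is a hypothetical placeholder.
import time
import boto3

codepipeline = boto3.client("codepipeline", region_name="us-east-1")
PIPELINE_NAME = "data-platform-deploy"  # hypothetical pipeline

execution_id = codepipeline.start_pipeline_execution(name=PIPELINE_NAME)[
    "pipelineExecutionId"
]

# Poll until the execution leaves the InProgress state.
while True:
    result = codepipeline.get_pipeline_execution(
        pipelineName=PIPELINE_NAME, pipelineExecutionId=execution_id
    )
    status = result["pipelineExecution"]["status"]
    if status != "InProgress":
        break
    time.sleep(15)

print(f"Pipeline {PIPELINE_NAME} finished with status: {status}")
```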

4.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Data Engineer, you will be responsible for developing and enhancing data processing, orchestration, monitoring, and more by leveraging popular open-source software, AWS, and GitLab automation. Collaborating with product and technology teams to design and validate the capabilities of the data platform will be a key part of your role. You will also identify, design, and implement process improvements, automate manual processes, optimize for usability, and redesign for greater scalability. Providing technical support and usage guidance to users of our platform services is another crucial aspect of this position, and you will drive the creation and refinement of metrics, monitoring, and alerting mechanisms to provide visibility into our production services.

The ideal candidate will have experience building and optimizing data pipelines in a distributed environment, supporting and working with cross-functional teams, and proficiency working in a Linux environment. A minimum of 4 years of advanced working knowledge of SQL, Python, and PySpark is required, with a strong emphasis on PySpark queries. Knowledge of Palantir and experience with tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline will be advantageous, as will experience with platform monitoring and alerting tools.

This is a Data Engineer role based in Hyderabad with a contract duration of 12+ months, likely to be extended. Strong skills in Python programming, PySpark queries, and AWS are essential, with Palantir as a secondary skill. If you are passionate about data engineering, possess the necessary technical skills, and thrive in a collaborative environment, we encourage you to apply for this opportunity.

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
