
2 Pipeline Tools Jobs

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

3.0 - 8.0 years

3 - 8 Lacs

Pune, Maharashtra, India

On-site

Source: Foundit

About the Role

Working within a global team, the role is responsible for implementing IT initiatives that support and continually improve operational efficiency, driving automation, improving cost-effectiveness, and creating operational excellence. This includes championing technology innovation and change, working with innovation stakeholders across the business, and creating a culture of collaboration. The role requires previous experience implementing Kubernetes within an organization and proficiency with infrastructure/platform automation, along with the ability to evangelize these across the organization. It is primarily focused on services, ensuring SLAs and efficiency targets are met, and providing support for solutions based on principles of high availability, agility, scale, and efficiency.

The role requires working with:

- Agile frameworks
- Infrastructure as Code
- Infrastructure and network technologies
- Orchestration of build and deployment of containerized environments
- Building and maintaining virtualization automation, monitoring, and reporting tools
- Troubleshooting, resolving, and assisting in complex environmental issue resolution
- Participation in automation initiatives driving change and efficiencies
- Mentoring and sharing knowledge with other members of the team through retros, planning meetings, and daily standups

All About You

- 3+ years of experience building and operating on-premises Kubernetes (ideally OpenShift); CKA, CKAD, or CKS certifications a plus
- Solid infrastructure experience: networking, storage, and compute (ideally VMware)
- Demonstrated ability to organize, manage, plan, and control several concurrent initiatives with conflicting needs
- Track record of successful delivery in a large enterprise environment
- Familiarity or working knowledge of public cloud patterns (AWS/EKS, Azure/AKS), container tools (Kubernetes, Docker), pipeline tools (Jenkins, Ansible, Terraform), ancillary tools (Artifactory, HashiCorp Vault), logging and monitoring (Loki, ELK, Prometheus, Kibana, Splunk, Dynatrace), and scripting (Python, Bash, Go)
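Among the responsibilities above is "building and maintaining virtualization automation, monitoring, and reporting tools." For context, here is a minimal sketch of such a reporting tool using the official `kubernetes` Python client. It assumes a reachable cluster and a local kubeconfig, and is illustrative only, not part of the posting.

```python
# Minimal sketch of the kind of reporting tool the listing describes:
# list every Deployment in the cluster and flag any that are not fully ready.
# Assumes the official `kubernetes` Python client (pip install kubernetes)
# and a local kubeconfig with access to the cluster; illustrative only.
from kubernetes import client, config

def report_deployment_readiness() -> None:
    config.load_kube_config()  # inside a pod, use config.load_incluster_config()
    apps = client.AppsV1Api()
    for dep in apps.list_deployment_for_all_namespaces().items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        status = "OK" if ready >= desired else "DEGRADED"
        print(f"{status:9} {dep.metadata.namespace}/{dep.metadata.name}: "
              f"{ready}/{desired} replicas ready")

if __name__ == "__main__":
    report_deployment_readiness()
```

In practice a tool like this would export metrics to Prometheus or a dashboard rather than print to stdout, but the readiness check is the core of the monitoring-and-reporting work the role names.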

Posted 16 hours ago

Apply

7.0 - 11.0 years

15 - 25 Lacs

Mumbai (All Areas)

Work from Office

Source: Naukri

Key Responsibilities:

- Design, develop, and implement a data lakehouse architecture on AWS, ensuring scalability, flexibility, and performance
- Build ETL/ELT pipelines for ingesting, transforming, and processing structured and unstructured data
- Collaborate with cross-functional teams to gather data requirements and deliver data solutions aligned with business needs
- Develop and manage data models, schemas, and data lakes for analytics, reporting, and BI purposes
- Implement data governance practices, ensuring data quality, security, and compliance
- Perform data integration between on-premise and cloud systems using AWS services
- Monitor and troubleshoot data pipelines and infrastructure for reliability and scalability

Skills and Qualifications:

- 7+ years of experience in data engineering, with a focus on cloud data platforms
- Strong experience with AWS services: S3, Glue, Redshift, Athena, Lambda, IAM, RDS, and EC2
- Hands-on experience building data lakes, data warehouses, and lakehouse architectures
- Experience building ETL/ELT pipelines using tools such as AWS Glue, Apache Spark, or similar
- Expertise in SQL and Python or Java for data processing and transformations
- Familiarity with data modeling and schema design in cloud environments
- Understanding of data security and governance practices, including IAM policies and data encryption
- Experience with big data technologies (e.g., Hadoop, Spark) and data streaming services (e.g., Kinesis, Kafka)
- Lending domain knowledge is an added advantage

Preferred Skills:

- Experience with Databricks or similar platforms for data engineering
- Familiarity with DevOps practices for deploying data solutions on AWS (CI/CD pipelines)
- Knowledge of API integration and cloud data migration strategies
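The listing asks for ETL/ELT pipelines built with tools like AWS Glue or Apache Spark. For context, here is a minimal PySpark sketch of that ingest-transform-load pattern; the bucket paths and column names are hypothetical placeholders, and it assumes a Spark runtime (such as Glue or EMR) already configured with S3 access.

```python
# Minimal PySpark sketch of the ETL pattern the listing asks for:
# ingest raw JSON from S3, apply a simple transformation, and write
# partitioned Parquet back to a curated lake zone. Bucket paths and
# column names are hypothetical placeholders, not from the posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("loan-events-etl").getOrCreate()

# Ingest: raw, semi-structured events land in the raw zone.
raw = spark.read.json("s3://example-raw-zone/loan_events/")

# Transform: normalize types, drop obviously bad records, stamp a load date.
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("event_id").isNotNull())
       .withColumn("load_date", F.current_date())
)

# Load: columnar, partitioned output that Athena or Redshift Spectrum can query.
clean.write.mode("append").partitionBy("load_date").parquet(
    "s3://example-curated-zone/loan_events/"
)
```

Writing partitioned Parquet keeps the curated zone cheap to scan from Athena or Redshift Spectrum, which matches the lakehouse pattern the role describes.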

Posted 6 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ portals in one click.

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Download Now

Featured Companies