Bengaluru, Delhi / NCR, Mumbai (All Areas)
INR 0.5 - 1.25 Lacs P.A.
Hybrid
Full Time
Role Description: We are looking for a DevOps Engineer to develop and support integrated automation solutions. Working in a team setting, this individual will create and maintain CI/CD pipelines; troubleshoot, enhance, and manage our software delivery pipelines; manage our cloud infrastructure; and build tooling that streamlines current automation workflows and supports new workflows for next-generation solutions.

Responsibilities:
- Serve as the team's main point of contact for cloud and infrastructure management
- Develop, enhance, and manage deployment automation pipelines, applying DevOps best practices (a minimal pipeline-step sketch follows this listing)
- Manage cloud infrastructure platforms
- Identify the scope of enhancements to our infrastructure and automation pipelines
- Assist in automating in-house processes
- Perform configuration management for cloud-based infrastructure
- Design and manage deployment processes that implement cloud infrastructure security

Requirements:
- 3+ years of well-rounded experience as a DevOps engineer
- Experience in software development for cloud-based applications and container-based solutions (Docker, Docker Swarm, Kubernetes)
- Computer Science degree or equivalent experience in software development
- Experience handling large data sets
- Cloud management experience (primarily AWS and Azure)
- Setup and management of databases (MySQL, PostgreSQL), database services, virtual machines, virtual networks, and IAM policies
- Comfort with version control (GitHub) and GitOps
- Experience in software development, preferably in Java or Python
- Experience with Linux systems and bash scripting
- Experience automating software and deployment environments for CI/CD
- Experience working with teams across several departments
- Understanding of best practices for system architecture, design, security, throughput, availability, and scalability
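As a rough illustration of the deployment automation this role covers, here is a minimal sketch of one CI/CD pipeline step in Python: build a Docker image tagged with the current commit and push it to a registry. The registry URL, image name, and command-line convention are hypothetical placeholders, not details from the posting.

```python
"""Minimal build-and-push pipeline step (sketch; names are hypothetical)."""
import subprocess
import sys

REGISTRY = "registry.example.com/acme"  # hypothetical registry
IMAGE = "web-app"                       # hypothetical service name


def run(cmd: list[str]) -> None:
    """Echo and run a shell command; a non-zero exit aborts the pipeline."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def build_and_push(git_sha: str) -> str:
    """Build the Docker image, tag it with the commit SHA, and push it."""
    tag = f"{REGISTRY}/{IMAGE}:{git_sha[:12]}"
    run(["docker", "build", "-t", tag, "."])
    run(["docker", "push", tag])
    return tag


if __name__ == "__main__":
    sha = sys.argv[1] if len(sys.argv) > 1 else "dev"
    print("pushed", build_and_push(sha))
```

In a real pipeline, a CI system (for example GitHub Actions, in keeping with the GitHub/GitOps requirement) would invoke a step like this on each merge, with the commit SHA supplied by the runner.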
Bengaluru, Delhi / NCR, Mumbai (All Areas)
INR 30.0 - 35.0 Lacs P.A.
Hybrid
Full Time
As a Data Engineering Lead, you will play a crucial role in overseeing the design, development, and maintenance of our organization's data architecture and infrastructure. You will design and develop the architecture for a data platform that processes large volumes of data efficiently and effectively, enabling the business to make informed decisions based on reliable, high-quality data. The ideal candidate will have a strong background in data engineering, excellent leadership skills, and a proven track record of successfully managing complex data projects.

Responsibilities:
- Data Architecture and Design: Design and implement scalable and efficient data architectures to support the organization's data processing needs. Work closely with cross-functional teams to understand data requirements and ensure that data solutions align with business objectives.
- ETL Development: Oversee the development of robust ETL processes to extract, transform, and load data from various sources into the data warehouse. Ensure data quality and integrity throughout the ETL process, implementing best practices for data cleansing and validation (a minimal ETL sketch follows this listing).
- Big Data Technologies: Stay abreast of emerging trends and technologies in big data and analytics, and assess their applicability to the organization's data strategy. Implement and optimize big data technologies to process and analyze large datasets efficiently.
- Cloud Integration: Collaborate with the IT infrastructure team to integrate data engineering solutions with cloud platforms, ensuring scalability, security, and performance.
- Performance Monitoring and Optimization: Implement monitoring tools and processes to track the performance of data pipelines and proactively address any issues. Optimize data processing workflows for improved efficiency and resource utilization.
- Documentation: Maintain comprehensive documentation for data engineering processes, data models, and system architecture. Ensure that team members follow documentation standards and best practices.
- Collaboration and Communication: Collaborate with data scientists, analysts, and other stakeholders to understand their data needs and deliver solutions that meet those requirements. Communicate effectively with technical and non-technical stakeholders, providing updates on project status, challenges, and opportunities.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- 6-8 years of professional experience in data engineering
- In-depth knowledge of data modeling, ETL processes, and data warehousing
- In-depth knowledge of building a data warehouse using Snowflake
- Experience with data ingestion, data lakes, data mesh, and data governance
- Experience in Python programming (must-have)
- Strong understanding of big data technologies and frameworks such as Hadoop, Spark, and Kafka
- Experience with cloud platforms such as AWS, Azure, or Google Cloud
- Familiarity with SQL and NoSQL database systems, and with data pipeline orchestration tools
- Excellent problem-solving and analytical skills
- Strong communication and interpersonal skills
- Proven ability to work collaboratively in a fast-paced, dynamic environment
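To make the ETL responsibility concrete, here is a minimal extract-transform-load sketch in Python with the kind of cleansing and validation rules the posting describes. SQLite stands in for the actual warehouse (the posting names Snowflake), and the source file name and column layout (order_id, amount) are hypothetical.

```python
"""Minimal ETL sketch with basic data-quality checks (names are hypothetical)."""
import csv
import sqlite3


def extract(path: str):
    """Extract: stream raw rows from a source CSV file."""
    with open(path, newline="") as fh:
        yield from csv.DictReader(fh)


def transform(rows):
    """Transform: drop incomplete rows and coerce amounts to floats."""
    for row in rows:
        if not row.get("order_id") or not row.get("amount"):
            continue  # cleansing rule: reject rows missing required fields
        try:
            amount = float(row["amount"])  # validation rule: amount must be numeric
        except ValueError:
            continue
        yield (row["order_id"], amount)


def load(rows, db_path: str = "warehouse.db") -> int:
    """Load: insert validated rows into the target table, returning a count."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS orders (order_id TEXT, amount REAL)")
    cur = con.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    con.commit()
    inserted = cur.rowcount
    con.close()
    return inserted


if __name__ == "__main__":
    print("loaded", load(transform(extract("orders.csv"))), "rows")
```

In production this pattern would typically run under an orchestration tool (the posting mentions data pipeline orchestration tools), with the validation rules maintained as part of the data-quality documentation.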