Posted: 2 weeks ago | On-site | Full Time
Senior Data Engineer (minimum 6-7 years of experience)
Location: Mohali, Punjab (Full-Time, Onsite)
Company: Data Couch Pvt. Ltd.

About Data Couch Pvt. Ltd.
Data Couch Pvt. Ltd. is a premier consulting and enterprise training company specializing in Data Engineering, Big Data, Cloud Technologies, DevOps, and AI/ML. With a strong presence across India and global client partnerships, we deliver impactful solutions and upskill teams across industries. Our expert consultants and trainers work with the latest technologies to empower digital transformation and data-driven decision-making for businesses.

Technologies We Work With
At Data Couch, you'll gain exposure to a wide range of modern tools and technologies, including:
- Big Data: Apache Spark, Hadoop, Hive, HBase, Pig
- Cloud Platforms: AWS, GCP, Microsoft Azure
- Programming: Python, Scala, SQL, PySpark
- DevOps & Orchestration: Kubernetes, Docker, Jenkins, Terraform
- Data Engineering Tools: Apache Airflow, Kafka, Flink, NiFi
- Data Warehousing: Snowflake, Amazon Redshift, Google BigQuery
- Analytics & Visualization: Power BI, Tableau
- Machine Learning & MLOps: MLflow, Databricks, TensorFlow, PyTorch
- Version Control & CI/CD: Git, GitLab CI/CD, CircleCI

Key Responsibilities
- Design, build, and maintain robust, scalable data pipelines using PySpark
- Leverage the Hadoop ecosystem (HDFS, Hive, etc.) for big data processing
- Develop and deploy data workflows in cloud environments (AWS, GCP, or Azure)
- Use Kubernetes to manage and orchestrate containerized data services
- Collaborate with cross-functional teams to develop integrated data solutions
- Monitor and optimize data workflows for performance, reliability, and security
- Follow best practices for data governance, compliance, and documentation

Must-Have Skills
- Proficiency in PySpark for ETL and data transformation tasks
- Hands-on experience with at least one cloud platform (AWS, GCP, or Azure)
- Strong grasp of Hadoop ecosystem tools such as HDFS and Hive
- Practical experience with Kubernetes for service orchestration
- Proficiency in Python and SQL
- Experience working with large-scale, distributed data systems
- Familiarity with tools such as Apache Airflow, Kafka, or Databricks
- Experience with data warehouses such as Snowflake, Redshift, or BigQuery
- Exposure to MLOps or integration of AI/ML pipelines
- Understanding of CI/CD pipelines and DevOps practices for data workflows

What We Offer
- Opportunity to work on cutting-edge data projects with global clients
- A collaborative, innovation-driven work culture
- Continuous learning via internal training, certifications, and mentorship
- Competitive compensation and growth opportunities

Job Type: Full-time
Pay: ₹1,200,000.00 - ₹15,000,000.00 per year
Benefits:
- Health insurance
- Leave encashment
- Paid sick time
- Paid time off
Schedule: Day shift
Work Location: In person