4.0 years

0.0 Lacs P.A.

Maharashtra, India

Posted: 1 week ago | Platform: LinkedIn


Skills Required

data, SQL Server, Power BI, Git, Kafka, Kafka Connect, Azure, Python, Databricks, reports, orchestration, code, migrate, support, processing, writing, ETL, integration, RabbitMQ, reliability, monitoring, troubleshooting, engineering, querying, database, programming, Java, relational, certifications, AWS, analytics, Spark, Hadoop, governance, compliance, JSON, containerization, Docker, learning

Work Mode

On-site

Job Type

Full Time

Job Description

Role: Tech Work, Intermediate.

Mandatory: MS SQL Server, Power BI or SSRS, Git (repositories and CI/CD).

Optional / Nice to Have: Kafka (or Kafka Connect), Azure Cloud, Python, Databricks.

Scope of work: scripts, stored procedures, reports, SSIS or other orchestration tools, checking in code, and transforming and migrating code to Azure Databricks.

Roles and Responsibilities

- Developing and maintaining data pipelines to support real-time and batch processing.
- Writing and optimizing SQL queries, stored procedures, and scripts for data processing.
- Supporting ETL/ELT workflows for data integration and transformation.
- Collaborating with team members to integrate data from various sources into centralized systems.
- Implementing and managing data streaming solutions using platforms like Kafka or RabbitMQ.
- Ensuring data quality and reliability across all pipelines and processes.
- Monitoring and troubleshooting data pipelines to ensure performance and reliability.
- Documenting data workflows and providing support for data-related issues.

Primary Skills

- Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- 2–4 years of experience in data engineering or related roles.
- Strong SQL skills, including querying and optimizing database operations.
- Experience developing data pipelines for real-time and batch processing.
- Hands-on experience with data streaming platforms such as Kafka or RabbitMQ.
- Familiarity with ETL processes and tools.
- Proficiency in a programming language such as Python or Java for data tasks.
- Knowledge of data modelling basics for relational databases.
- Attention to detail and a commitment to ensuring data accuracy and reliability.
- Problem-solving skills and the ability to troubleshoot issues in data systems.

Secondary Skills

- Relevant certifications (e.g., AWS Certified Data Analytics – Specialty, Microsoft Certified: Azure Data Engineer Associate, or Databricks Certified Data Engineer).
- Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud) and cloud-based data services.
- Exposure to big data technologies such as Spark or Hadoop.
- Knowledge of data governance and compliance standards.
- Experience with data formats like JSON or Parquet.
- Basic understanding of containerization tools such as Docker.
- Interest in learning and adopting emerging data technologies.
- Coursework or certifications in data engineering or related fields.
