Job Description
                                The opportunity  
The Data Engineer, Specialist is responsible for building and maintaining scalable data pipelines to ensure seamless integration and efficient data flow across various platforms. This role involves designing, developing, and optimizing ETL (Extract, Transform, Load) processes, managing databases, and leveraging big data technologies to support analytics and business intelligence initiatives.

Your key responsibilities
 
Design, develop and maintain scalable ETL (Extract, Transform, Load) processes to efficiently extract data from various structured and unstructured sources, ensuring accuracy, consistency and performance optimization. 
Architect and manage database systems to support large-scale data storage and retrieval, ensuring high availability, security and efficiency in handling complex datasets. 
Integrate and transform data from multiple sources including APIs, on-premises databases and cloud storage, creating unified datasets to support data-driven decision-making across the organization. 
Collaborate with business intelligence analysts, data scientists and other stakeholders to understand specific data needs, ensuring the delivery of high-quality, business-relevant datasets. 
Monitor, troubleshoot and optimize data pipelines and workflows to resolve performance bottlenecks, improve processing efficiency and ensure data integrity. 
Develop automation frameworks for data ingestion, transformation and reporting to streamline data operations and reduce manual effort. 
Work with cloud-based data platforms and technologies such as AWS (Redshift, Glue, S3), Google Cloud (BigQuery, Dataflow), or Azure (Synapse, Data Factory) to build scalable data solutions. 
Optimize data storage, indexing, and query performance to support real-time analytics and reporting, ensuring cost-effective and high-performing data solutions. 
Lead or contribute to special projects involving data architecture improvements, migration to modern data platforms, or advanced analytics initiatives. 
  Skills and attributes for success   
 
A team player with strong analytical, communication and interpersonal skills 
A commitment to staying current with new technologies in the market 
A winning personality and the ability to become a trusted advisor to the stakeholders 
  To qualify for the role, you must have   
 
Minimum 5 years of relevant work experience, with at least 2 years in designing and maintaining data pipelines, ETL processes and database architectures. 
Bachelor's degree (B.E./B.Tech) in Computer Science or IT, or Bachelor's in Data Science, Statistics, or a related field. 
Strong expertise in SQL, Python, or Scala for data processing, automation and transformation. 
Hands-on experience with big data frameworks such as Apache Spark, Hadoop, or Kafka for real-time and batch processing. 
Experience in cloud data platforms including AWS Redshift, Google BigQuery, or Azure Synapse, with proficiency in cloud-native ETL and data pipeline tools. 
Strong understanding of data modeling principles and of relational and NoSQL databases (PostgreSQL, MySQL, MongoDB, Cassandra). 
Ideally, you'll also have
 
Strong verbal and written communication, facilitation, relationship-building, presentation and negotiation skills. 
A high degree of flexibility, adaptability and creativity. 
Comfort interacting with senior executives (within the firm and at the client).