Senior Consultant

7 - 12 years

7 - 15 Lacs

Chennai, Tamil Nadu, India

Posted: 1 day ago | Platform: Foundit


Skills Required

ETL/ELT Development, API & Microservices Development, Snowflake & DBT, AWS Cloud Services

Work Mode

On-site

Job Type

Full Time

Job Description

Accountabilities

As a Full Stack Data Engineer, you will:

• Develop, maintain, and optimize scalable data pipelines for data integration, transformation, and analysis, ensuring high performance and reliability.
• Demonstrate proficiency in ETL/ELT processes, including writing, testing, and maintaining high-quality code for data ingestion and transformation.
• Improve the efficiency and performance of data pipelines and workflows, applying advanced data engineering techniques and standard methodologies.
• Develop and maintain data models that represent data structures and relationships, ensuring alignment with business requirements and enhancing data usability.
• Develop APIs and microservices for seamless data integration across platforms, and collaborate with software engineers to integrate front-end and back-end components with the data platform.
• Optimize and tune databases and queries for maximum performance and reliability, and maintain existing data pipelines to improve performance and quality.
• Mentor other developers on standard methodologies, conduct peer programming and code reviews, and help evolve the systems architecture to consistently improve development efficiency.
• Ensure compliance with data security and privacy regulations, and implement data validation and cleansing techniques to maintain consistency.
• Stay updated with emerging technologies and standard methodologies in data engineering and software development, and contribute to all phases of the software development lifecycle (SDLC).
• Work closely with data scientists, analysts, partners, and product managers to understand requirements, deliver high-quality data solutions, and support alignment of data sources and specifications.
• Perform unit testing, system integration testing, and regression testing, and assist with user acceptance testing to ensure data solutions meet quality standards.
• Work with the QA team to develop testing protocols and to identify and correct issues.
• Maintain clear documentation, including Knowledge Base Articles (KBAs), data models, pipeline documentation, and deployment release notes.
• Diagnose and resolve complex issues related to data pipelines, backend services, and frontend applications, ensuring smooth operation and user satisfaction.
• Use and manage cloud-based services (e.g., AWS) for data storage and processing, and implement and manage CI/CD pipelines, version control, and deployment processes.
• Liaise with internal teams and third-party vendors to address application issues and project needs effectively.
• Create and maintain data visualizations and dashboards to provide actionable insights.

Essential Skills/Experience

• A minimum of 7 years of experience in developing and delivering software engineering and data engineering solutions.
• Extensive experience with ELT/ETL tools such as SnapLogic, Fivetran, or similar.
• Deep expertise in Snowflake, DBT (Data Build Tool), and similar data warehousing technologies.
• Proficiency in designing and optimizing data models and transformations for large-scale data systems.
• Strong knowledge of data pipeline principles, including dimensional modelling, schema design, and data integration patterns.
• Familiarity with Data Mesh and Data Product concepts, including experience in delivering and managing data products.
• Strong data orchestration skills to effectively manage and streamline data workflows and processes.
• Proficiency in data visualization technologies, with experience in advanced use of tools such as Power BI or similar.
• Solid understanding of DevOps practices, including CI/CD pipelines and version control systems such as GitHub, with the ability to implement and maintain automated deployment and integration processes.
• Experience with containerization technologies such as Docker and orchestration tools such as Kubernetes.
• Experience with automated testing frameworks (unit testing, system integration testing, regression testing, and data testing).
• Strong proficiency in programming languages such as Python, Java, or similar.
• Experience with both relational (e.g., MySQL, PostgreSQL) and NoSQL databases.
• Deep technical expertise in building software and analytical solutions with a modern JavaScript stack (Node.js, ReactJS, AngularJS).
• Strong knowledge of cloud-based data, compute, and storage services, including AWS (S3, EC2, RDS, EBS, Lambda), orchestration services (e.g., Airflow, MWAA), and containerization services (e.g., ECS, EKS).
• Excellent communication and interpersonal skills, with a proven ability to manage partner expectations, gather requirements, and translate them into technical solutions.
• Experience working in Agile development environments, with a strong understanding of Agile principles and practices, and the ability to adapt to changing requirements and contribute to iterative development cycles.
• Advanced SQL skills for data analysis.
• Strong problem-solving skills, with a focus on finding innovative solutions to complex data challenges.
• Strong analytical and reasoning skills, with the ability to visualize processes and outcomes.
• Strategic thinker with a focus on finding innovative solutions to complex data challenges.

Desirable Skills/Experience

• Bachelor's or Master's degree in Health Sciences, Life Sciences, Data Management, Information Technology, or a related field, or equivalent experience.
• Significant experience working in the pharmaceutical industry, with a deep understanding of industry-specific data requirements.
• Demonstrated ability to manage and collaborate with a diverse range of partners, ensuring high levels of satisfaction and successful project delivery.
• Proven capability to work independently and thrive in a dynamic, fast-paced environment, managing multiple tasks and adapting to evolving conditions.
• Experience working in large multinational organizations, especially within pharmaceutical or similar environments, demonstrating familiarity with global data systems and processes.
• Certification in AWS Cloud or other relevant data engineering or software engineering certifications, showcasing advanced knowledge and technical proficiency.
• Awareness of use-case-specific GenAI tools available in the market and their application in day-to-day work scenarios.
• Working knowledge of basic prompting techniques and a commitment to continuous improvement of these skills.
• Ability to stay up to date with developments in AI and GenAI, applying new insights to work-related situations.

AstraZeneca

Pharmaceutical Manufacturing

Cambridge, Cambridgeshire

10001 Employees

48 Jobs

Key People

• Pascal Soriot
  Chief Executive Officer
• Mene Pangalos
  Executive Vice President, BioPharmaceuticals R&D
