Note: Only candidates with 5+ years of relevant full stack development experience will be considered for this role.

We are looking for a Senior Full Stack Engineer to lead the design and development of innovative digital products across frontend and backend. This role is ideal for a highly skilled engineer with strong technical expertise and the ability to architect, build, and deliver end-to-end solutions. You'll work on diverse projects including custom applications, web platforms, and integrated tools, independently or as part of a collaborative team.

What you'll do
- Design, develop, and maintain full stack web applications across multiple business domains
- Define system architecture and integration strategies in collaboration with solution architects and leads
- Build scalable APIs, robust databases, and efficient server-side logic
- Develop dynamic, responsive UIs and ensure seamless client-side experiences
- Optimize applications for performance, security, and scalability
- Drive best practices in code quality, testing, documentation, and deployment
- Mentor junior developers and help shape internal engineering standards
- Lead projects from planning through deployment in Agile sprints using JIRA

What you bring
- 5+ years of full stack development experience, ideally in fast-paced product or agency environments
- Advanced knowledge of HTML, CSS, and JavaScript, with expertise in Vue.js (React or Angular also valued)
- Strong backend experience with PHP (Laravel) and Node.js
- Deep understanding of REST APIs, authentication, relational databases, and scalable architectures
- Skilled in Tailwind CSS, SASS, frontend optimization, and Git-based workflows
- Familiarity with CI/CD pipelines, automated testing, debugging, and Agile delivery
- Strong portfolio showcasing full lifecycle ownership of complex applications
- Proven ability to lead technical decisions and mentor engineering teams

Nice to have
- Exposure to cloud infrastructure, DevOps, and containerization (Docker)
Profile: Data Engineer Lead
Experience: 7+ years
Location: Noida
Work Mode: Hybrid
Skills: Data Engineering, Team Lead, Snowflake, AWS/Azure, Databricks, DBT, Data Warehousing

To apply, share your CV at yogendra.jsohi@vservit.com

Key Responsibilities

Data Engineering & Pipeline Development
- Develop and maintain robust, scalable ETL pipelines to ingest, clean, and transform public (e.g., CMS, Medicaid), private (e.g., commercial claims, EHRs), and purchased (e.g., Clarivate, Doceree) datasets.
- Optimize data workflows and storage solutions to enhance data accessibility and usability for analytics and business intelligence teams.
- Implement data quality frameworks, ensuring the integrity and accuracy of healthcare datasets used for decision-making.
- Collaborate with data architects to design and enhance data models, ensuring efficient query performance and integration across multiple sources.

Collaboration with External Vendors & Partners
- Work closely with external data vendors, research organizations, and commercial agencies to define projects, ensure data consistency, and improve ingestion processes.
- Provide technical oversight on vendor-sourced data projects, ensuring compliance with Avalere's data quality and governance standards.
- Support the evaluation and acquisition of new datasets, integrating them into Avalere's data ecosystem for analytics and modeling.

Internal Team Leadership & Support
- Partner with data scientists, analysts, and market access teams to develop data solutions that drive business insights.
- Provide technical mentorship to junior data engineers, ensuring best practices in data pipeline development, automation, and cloud-based infrastructure.
- Work cross-functionally with data architects and analytics teams to align data engineering efforts with business and research objectives.

Required Skills and Experience

Education
- Bachelor's degree in computer science, information systems, data engineering, or a related field.
- Master's degree preferred.

Technical Expertise
- Extensive experience in building and managing ETL pipelines for large-scale healthcare data processing.
- Strong knowledge of public healthcare datasets, including Medicare/Medicaid claims, state health registries, and government-funded research data.
- Experience with private-sector datasets, such as commercial claims, EHRs, and provider-generated data.
- Proficiency in SQL, Python, and Spark for large-scale data processing and transformation.
- Hands-on experience with cloud platforms (AWS, Azure, GCP) and big data technologies (Databricks, Snowflake, Hadoop).
- Deep understanding of data modeling, data warehousing, and governance best practices.
- Experience implementing data security and compliance frameworks, including HIPAA, HITRUST, and other healthcare regulations.

Professional Experience
- 8+ years of experience in data engineering, pipeline development, or a related field, preferably within healthcare, consulting, or analytics.
- Demonstrated ability to integrate, process, and optimize multi-source datasets for analytics and research applications.
- Experience managing data acquisition projects and working with external vendors to refine data ingestion approaches.
- Proven track record in developing scalable cloud-based data solutions for enterprise-wide use.