Company: Beghou
Business Type: Consulting Firm
Company Type: Service
Business Model: Others
Funding Stage: Bootstrapped
Industry: IT Consulting
Salary Range: ₹ 12-16 Lacs PA
About the Company
Beghou Consulting brings 30+ years of expertise in helping life sciences companies optimize commercialization through strategic insight, advanced analytics, and innovative technology. From building go-to-market strategies to deploying enterprise-grade data infrastructures and leveraging AI for deeper customer insights, Beghou empowers global life sciences organizations to maximize performance across their portfolios. We deploy both proprietary and third-party technology platforms to help clients forecast, design territories, manage customer and medical data, and execute analytics-driven planning with precision.
Mission
To combine analytical excellence with innovative technology to help life sciences companies navigate healthcare complexity and improve patient outcomes.
🔍 Role Overview
This role focuses on developing front-end and back-end infrastructure for Beghou’s in-house enterprise data platform. You will also support diverse client teams with data deployments, backend dataset creation, platform scaling, and analytics-focused engineering work. The role requires strong technical depth, independence, and collaboration with consulting and product teams.
🧩 Key Responsibilities
- Ensure reliability, stability, and performance of the enterprise data platform
- Demonstrate solid understanding of platform components: data warehouse, monitoring, languages, ETL design, schema design
- Build & enhance data integrations, pipelines, and analytics tools
- Document code, deployments, best practices, and system improvements
- Design & build ETL/ELT systems such as APIs, automated transfer pipelines, and code management solutions
- Serve as a point of contact for data engineering projects and collaborate with consulting, platform, and tech teams
- Participate in brainstorming, designing, and implementing solutions for complex data challenges
- Develop best practice frameworks for data management and advanced analytics pipelines
- Produce high-quality project deliverables, documentation, and presentations
- Handle additional responsibilities as assigned
🛠 Technical Requirements
Core Requirements
- 5+ years of experience in data engineering using Python, with strong expertise in pandas or PySpark
- Extensive, hands-on experience with Databricks, including:
  - Installing packages
  - Cluster configuration
  - Job management
  - User roles & permissions
  - Unity Catalog
  - Databricks APIs & AI tools
  - Troubleshooting config issues
- Experience with relational databases: PostgreSQL, Oracle, MySQL, Redshift, Snowflake
- Strong grounding in software development fundamentals (Agile, Git, DevOps, testing, documentation)
Preferred Skills
- Experience configuring Azure AD/SAML/Okta/OAuth
- Knowledge of AWS or Azure security best practices
- Familiarity with ETL tools: Azure Data Factory, Informatica, SnapLogic, Boomi
- Experience with Docker, Kubernetes, AWS ECS
- At least 6 months of experience in web development using:
  - Flask, Django
  - JavaScript, AJAX
  - HTML/CSS
- Data engineering certifications (Databricks, Google Cloud, Azure, AWS)
- Experience in the Life Sciences industry
- Proficiency in MS Excel, PowerPoint, Word
✔ Ideal Candidate
- Strong data engineering fundamentals
- Comfortable working independently and supporting cross-functional teams
- Excellent communication and documentation skills
- Strong problem-solving capabilities
- Ability to thrive in a consulting-oriented, client-focused environment