Company Description:
JMAN Group is a fast-growing data engineering & data science consultancy. We work primarily with Private Equity Funds and their Portfolio Companies to create commercial value using Data & Artificial Intelligence. In addition, we also work with growth businesses, large corporates, multinationals, and charities.
We are headquartered in London, with offices in New York and Chennai. Our team of over 450 people is a unique blend of individuals with skills across commercial consulting, data science, and software engineering.
We were founded by cousins Anush Newman (Co-founder & CEO) and Leo Valan (Co-founder & CTO) and have grown rapidly since 2019. In May 2023 we took a minority investment from Baird Capital, and in January 2024 we opened an office in New York with the ambition of growing our US business to be as large as, if not larger than, our European business by 2027.
Why work at JMAN?
- Our vision is to ensure JMAN Group is the passport to our team’s future. We want our team to go on a fast-paced, high-growth journey with us – when our people want to do something else, the skills, training, exposure, and values that JMAN has instilled in them should open doors all over the world.
- Current Benefits:
- Competitive annual bonus
- Market-leading private health insurance
- Regular company socials
- Annual company away days
- Extensive training opportunities
TECHNOLOGY ARCHITECT
Technical specification
- Strong experience in at least one ETL/ELT or data pipeline tool, such as AWS Glue/Azure Data Factory/Synapse/Matillion/dbt.
- Ability to use Git for version control and to maintain versions of data models.
- Hands-on experience with at least one data warehouse/platform, such as Azure SQL Server/Redshift/BigQuery/Databricks/Snowflake.
- Strong hands-on experience writing SQL queries, optimizing database queries, and building stored procedures.
- Familiarity with data visualization tools such as Power BI/Tableau/Looker.
- Ability to integrate with SaaS-based CRM, accounting, and financial systems (HubSpot, Salesforce, NetSuite, Zoho, etc.).
- Experience with the end-to-end deployment process, from understanding the business requirement or idea through to implementation.
- Expertise in data solution architectures and the tools and techniques used for data management.
- Experience constructing and implementing operational data stores and data marts.
- Strong proficiency in Python and PySpark (a minimal pipeline sketch follows this list).
- Knowledge of SDLC and Agile/Scrum methodologies.
- Experience with budgeting, service costing, and migrating products from legacy to modernized applications.
- Proficiency in SQL and experience working with relational databases like MySQL, PostgreSQL, etc.
- Experience with NoSQL databases like MongoDB, Cassandra, or DynamoDB is a plus.
- Strong experience working with on-premises and cloud-based databases and data lakes.
- Proven experience as a Data Engineer with a focus on cloud platforms (AWS/Azure/GCP).
- Experience in full-stack technology is a plus.
- Hands-on experience taking ownership of design patterns and best practices, defining data models, and determining implementation strategy.
- The role requires extensive internal interfacing, including with CoE leadership and business units.
- Identify opportunities for automation and implement tools to streamline data engineering workflows.
- Ensure compliance with data governance policies, data privacy regulations, and industry best practices.
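Purely as an illustration of the pipeline skills listed above, here is a minimal PySpark sketch of an extract-transform-load step. Everything in it is hypothetical: the orders.csv input, the column names, and the output path are assumptions made for the example, not part of any real project.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical example: paths, columns, and data are illustrative only.
spark = SparkSession.builder.appName("orders_pipeline_sketch").getOrCreate()

# Extract: read raw orders (CSV assumed; a tool like Glue/ADF/dbt would
# typically orchestrate this step).
orders = spark.read.option("header", True).csv("/data/raw/orders.csv")

# Transform: cast types, drop incomplete rows, aggregate spend per customer.
clean = (
    orders
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropna(subset=["customer_id", "amount"])
)
per_customer = clean.groupBy("customer_id").agg(
    F.count("*").alias("order_count"),
    F.sum("amount").alias("total_spend"),
)

# Load: write curated output as Parquet for a warehouse or lake to consume.
per_customer.write.mode("overwrite").parquet("/data/curated/customer_spend")

spark.stop()
```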
Responsibilities
- Manage and lead a team of data engineers, providing guidance, mentorship, and fostering a collaborative environment to maximize team performance.
- Diagnose the existing architecture and data maturity, and help the organization identify gaps and possible solutions.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Comply with coding standards and ensure the creation of test cases/queries/validations for all developed features.
- Data governance: metadata management, data quality, etc.
- Evaluate cloud data platforms and recommend the optimal solution for customer needs.
- Guide and monitor the team across the end-to-end operational process.
- Dimensional modelling and business domain: convert data into business domain entities using dimensional modelling or the Data Vault design pattern (see the star-schema sketch after this list).
- You will oversee the development and maintenance of data pipelines, ensuring data quality, reliability, security, and scalability.
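As a concrete illustration of the dimensional-modelling responsibility above, the sketch below builds a toy star schema in PySpark: a product dimension with a surrogate key, and a fact table that references it. Every name and row of data here is hypothetical and exists only to show the pattern.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

# Hypothetical star-schema sketch: all names and rows are made up.
spark = SparkSession.builder.appName("star_schema_sketch").getOrCreate()

sales = spark.createDataFrame(
    [("2024-01-05", "C001", "Widget", 120.0),
     ("2024-01-06", "C002", "Gadget", 80.0),
     ("2024-01-06", "C001", "Gadget", 80.0)],
    ["sale_date", "customer_code", "product_name", "amount"],
)

# Dimension: one row per product, assigned a surrogate key.
# (A Window without partitionBy is fine for a toy dataset; it warns at scale.)
dim_product = (
    sales.select("product_name").distinct()
    .withColumn("product_sk",
                F.row_number().over(Window.orderBy("product_name")))
)

# Fact: join back so each fact row carries the dimension's surrogate key
# instead of the raw business value.
fact_sales = (
    sales.join(dim_product, "product_name")
    .select("sale_date", "customer_code", "product_sk", "amount")
)

fact_sales.show()
spark.stop()
```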
Competencies
- Ready to learn, adopt, and implement state-of-the-art open-source technologies to deliver path-breaking business solutions for leading organisations globally.
- Build moderately complex prototypes to test new concepts, contribute ideas for reusable frameworks, components, and data products or solutions, and help promote the adoption of new technologies.
- Promote a culture of continuous learning and knowledge sharing; keep abreast of emerging data engineering technologies and trends, and recommend their adoption to improve efficiency and productivity.
- Design and develop project proposals, technology architecture, delivery management, and career development strategies.
- Strong analytical and critical thinking skills.
- Excellent written and oral communication in English to collaborate with cross-functional teams.
PRIMARY SKILLSET:
- ETL or ELT: AWS Glue/Azure Data Factory/Synapse/Matillion/dbt (any one - mandatory).
- Data Warehouse: Azure SQL Server/Redshift/BigQuery/Databricks/Snowflake (any one - mandatory).
- Cloud Experience: AWS/Azure/GCP (any one - mandatory).
- Programming Language: Apache Spark/Python.
- Data Patterns.
- Data Modelling.
- Product migration.
SECONDARY SKILLSET:
- Data Visualization: Power BI/Tableau/Looker (any one - good to have).
- Full-stack technologies (good to have).