
Agilisium Consulting

10 Job openings at Agilisium Consulting
Senior Data Engineer Hyderabad, Telangana, India 7 years Not disclosed On-site Full Time

Looking for Senior Data Engineers / Data Architects with 7+ years of experience.
Location: Chennai/Hyderabad
Notice Period: Immediate to 30 days (ONLY)
Mandatory key skills: AWS, Databricks, Python, PySpark, SQL

1. Data Pipeline Development: Design, build, and maintain scalable data pipelines for ingesting, processing, and transforming large datasets from diverse sources into usable formats.
2. Data Integration and Transformation: Integrate data from multiple sources, ensuring data is accurately transformed and stored in optimal formats (e.g., Delta Lake, Redshift, S3).
3. Performance Optimization: Optimize data processing and storage systems for cost efficiency and high performance, including managing compute resources and cluster configurations.
4. Automation and Workflow Management: Automate data workflows using tools like Airflow, Databricks APIs, and other orchestration technologies to streamline data ingestion, processing, and reporting tasks.
5. Data Quality and Validation: Implement data quality checks, validation rules, and transformation logic to ensure the accuracy, consistency, and reliability of data.
6. Cloud Platform Management: Manage and optimize cloud infrastructure (AWS, Databricks) for data storage, processing, and compute resources, ensuring seamless data operations.
7. Migration and Upgrades: Lead migrations from legacy data systems to modern cloud-based platforms, ensuring smooth transitions and enhanced scalability.
8. Cost Optimization: Implement strategies for reducing cloud infrastructure costs, such as optimizing resource usage, setting up lifecycle policies, and automating cost alerts.
9. Data Security and Compliance: Ensure secure access to data by implementing IAM roles and policies, adhering to data security best practices, and enforcing compliance with organizational standards.
10. Collaboration and Support: Work closely with data scientists, analysts, and business teams to understand data requirements and provide support for data-related tasks.
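As a minimal illustration of the data quality and validation responsibility above, a rule-based check might look like the following sketch in plain Python. The rule names, record fields, and thresholds are hypothetical examples, not part of the posting; in practice such checks would typically run inside a PySpark or Databricks job.

```python
# Minimal data-quality validation sketch: each rule is a predicate applied
# per record; failed checks are collected for reporting or quarantining.

def validate(records, rules):
    """Return a list of (row_index, rule_name) pairs for failed checks."""
    violations = []
    for i, row in enumerate(records):
        for name, check in rules.items():
            if not check(row):
                violations.append((i, name))
    return violations

# Hypothetical rules for a sales feed: IDs must be present,
# amounts must be non-negative.
rules = {
    "id_not_null": lambda r: r.get("id") is not None,
    "amount_non_negative": lambda r: r.get("amount", 0) >= 0,
}

records = [
    {"id": 1, "amount": 100.0},
    {"id": None, "amount": 50.0},
    {"id": 3, "amount": -5.0},
]

print(validate(records, rules))  # -> [(1, 'id_not_null'), (2, 'amount_non_negative')]
```

The same pattern scales to column-level checks over a DataFrame; only the iteration mechanism changes.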

Scrum Master Chennai, Tamil Nadu, India 10 years Not disclosed On-site Full Time

Scrum Master (Life Sciences – Commercial Domain)
10+ Years of Experience | Customer-Facing | Technical & Domain Expertise
Location: WFO - Chennai

Experienced Scrum Master with 10+ years in Agile delivery, adept at managing high-pressure engagements with demanding clients in the Life Sciences (Commercial) domain. Proven ability to bridge technical teams and stakeholders, ensuring alignment on business goals while driving Agile best practices.

Key Strengths:
✔ Strong Customer Liaison – Directly interfaces with tough clients, managing expectations, conflicts, and complex requirements.
✔ Deep Domain Knowledge – Expertise in Life Sciences Commercial Operations (e.g., CRM, Sales/Marketing Analytics, Patient Services, HCP Engagement).
✔ Technical Agility – Understands software development (SaaS, data platforms, AI/ML) to facilitate sprint planning, backlog refinement, and risk mitigation.
✔ Coaching & Influence – Guides teams in SAFe/Scrum, removes impediments, and fosters continuous improvement.
✔ Stakeholder Management – Balances business urgency with sustainable Agile practices, ensuring transparency and trust.

Qualifications & Skills:
✅ Certified Scrum Master (CSM/PSM) – Mandatory
✅ 10+ years of hands-on Scrum Master experience in Agile software development.
✅ Expertise in Jira, Miro, and Agile project management tools.
✅ Strong knowledge of sprint metrics (velocity, burn-down/up charts).
✅ SAFe (Scaled Agile Framework) knowledge is a plus, especially PI Planning.
✅ Excellent facilitation, coaching, and conflict-resolution skills.
✅ Experience onboarding and tracking resources, including external/non-AGS team members.
✅ Strong communication and stakeholder management skills.

Ideal for: Complex projects requiring a hands-on, assertive Scrum Master who can navigate technical challenges, domain complexities, and high-stakes client interactions.
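For the sprint metrics named in this posting (velocity and burn-down charts), a minimal sketch of how they are computed follows. The story-point numbers are invented for illustration only.

```python
# Sketch of two common sprint metrics: velocity is the average of
# completed story points per sprint; a burn-down series is the work
# remaining after each day of the sprint.

def velocity(completed_points):
    """Average story points completed per sprint."""
    return sum(completed_points) / len(completed_points)

def burn_down(total_points, completed_per_day):
    """Remaining story points after each day of the sprint."""
    remaining, series = total_points, []
    for done in completed_per_day:
        remaining -= done
        series.append(remaining)
    return series

# Hypothetical data: three past sprints, then one 5-day sprint of 20 points.
print(velocity([18, 22, 20]))          # -> 20.0
print(burn_down(20, [4, 3, 5, 4, 4]))  # -> [16, 13, 8, 4, 0]
```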

Senior Data Engineer Chennai, Tamil Nadu, India 4 - 9 years Not disclosed On-site Full Time

Experience: 4 to 9 years

Required Skills: Python, SQL, PySpark, AWS (knowledge), Databricks (good to have)

Responsibilities:
- Data Pipeline Development
- Data Integration and Transformation
- Performance Optimization
- Automation and Workflow Management
- Data Quality and Validation
- Cloud Platform Management
- Migration and Upgrades
- Cost Optimization
- Data Security and Compliance
- Collaboration and Support

Life sciences/Pharma experience is an added advantage.

Senior Architect - Gen AI Chennai, Tamil Nadu, India 5 years Not disclosed On-site Full Time

Key Responsibilities
As a Data Architect with Generative AI expertise, you will:
- Design and implement robust data architectures that support AI and machine learning (ML) workloads, including Generative AI applications.
- Develop and optimize data pipelines for training, validating, and deploying AI models efficiently and securely.
- Integrate AI frameworks into data platforms, ensuring scalability, low latency, and high throughput.
- Collaborate with data scientists, AI engineers, and stakeholders to align data strategies with business goals.
- Lead initiatives to ensure data governance, security, and compliance standards (e.g., GDPR, CCPA) are met in AI-driven environments.
- Prototype and implement architectures that utilize generative models (e.g., GPT, Stable Diffusion) to enhance business processes.
- Stay up to date with the latest trends in Generative AI, data engineering, and cloud technologies to recommend and integrate innovations.

Required Qualifications
We're looking for someone with:
- A bachelor's degree in Computer Science, Data Engineering, or a related field (master's preferred).
- 5+ years of experience in data architecture, with a focus on AI/ML-enabled systems.
- Hands-on experience with Generative AI models (e.g., OpenAI GPT, BERT, or similar), including fine-tuning and deployment.
- Proficiency in data engineering tools and frameworks such as Apache Spark, Hadoop, and Kafka.
- Deep knowledge of database systems (SQL, NoSQL) and cloud platforms (AWS, Azure, GCP), including their AI/ML services (e.g., AWS SageMaker, Azure ML, GCP Vertex AI).
- Strong understanding of data governance, MLOps, and AI model lifecycle management.
- Experience with programming languages such as Python or R and frameworks like TensorFlow or PyTorch.
- Excellent problem-solving and communication skills, with a demonstrated ability to lead cross-functional teams.

Preferred Skills
- Familiarity with LLM fine-tuning, prompt engineering, and embedding models.
- Strong domain expertise in the Life Sciences industry.
- Experience integrating generative AI solutions into production-level applications.
- Knowledge of vector databases (e.g., Pinecone, Weaviate) for storing and retrieving embeddings.
- Expertise in APIs for AI models, such as the OpenAI API or Hugging Face Transformers.
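To illustrate the embedding storage and retrieval that vector databases such as Pinecone or Weaviate perform, here is a toy sketch in plain Python using cosine similarity. The document IDs and vectors are invented; production systems use approximate nearest-neighbor indexes rather than a linear scan.

```python
import math

# Toy embedding retrieval: store (id, vector) pairs and return the
# nearest stored IDs by cosine similarity to a query vector.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query, store, k=1):
    """Return the k stored IDs whose embeddings are most similar to the query."""
    ranked = sorted(store, key=lambda item: cosine(query, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Hypothetical 3-dimensional embeddings (real ones have hundreds of dims).
store = [
    ("doc_a", [1.0, 0.0, 0.0]),
    ("doc_b", [0.0, 1.0, 0.0]),
    ("doc_c", [0.7, 0.7, 0.0]),
]

print(nearest([0.9, 0.1, 0.0], store, k=2))  # -> ['doc_a', 'doc_c']
```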

Data Steward Chennai, Tamil Nadu, India 7 years Not disclosed On-site Full Time

Job Title: Data Steward with Unity Catalog, Databricks
Experience: 7+ years
Location: Chennai/Hyderabad
Mode: WFO

Responsibilities:
· Perform data profiling and structural analysis to identify critical data elements, definitions, and usage patterns.
· Develop and maintain comprehensive documentation including policies, standards, manuals, and process flows to support data governance.
· Collaborate with business SMEs to define data domains, establish data products, and identify data stewards and governance artifacts.
· Ensure all data management practices align with established data governance policies, procedures, and compliance requirements.
· Contribute to the design, development, and deployment of data systems by combining technical expertise with hands-on implementation.
· Define and enforce data standards in collaboration with stakeholders to optimize data collection, storage, access, and utilization.
· Lead the implementation of data governance frameworks encompassing data quality, metadata management, and data lineage.
· Drive data stewardship programs and provide expert guidance on governance best practices to business and technical teams.
· Manage and operate data governance platforms such as Collibra, InfoSphere, Erwin, and Unity Catalog.
· Design and automate data quality metrics, dashboards, and reports to monitor governance effectiveness and support continuous improvement.

Basic Qualifications
· Bachelor's degree in Data Science, Computer Science, Information Management, or a related field.
· Minimum 7 years of professional experience in Data Governance or related data management roles.
· Strong knowledge of data governance frameworks, data quality management, and compliance standards.
· Hands-on experience with Databricks and Unity Catalog.
· Experience with governance tools such as Collibra, InfoSphere, Erwin, or similar platforms.

Preferred Qualifications
· Certifications in data governance or data management (e.g., Certified Information Management Professional - CIMAP).
· Familiarity with data privacy regulations such as GDPR and CCPA.
· Experience with AWS cloud services such as S3, Redshift, etc.
· Proficiency in SQL, Python, or other scripting languages for data analysis and automation.
· Prior experience working in the life sciences industry is a plus.
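The data-profiling step described in this posting (identifying critical data elements and usage patterns) can be sketched as per-column null and distinct-value counts. The column names and report shape below are illustrative assumptions; a real implementation would run against Databricks tables rather than in-memory dicts.

```python
# Minimal data-profiling sketch: for each column, count missing values
# and distinct non-null values to flag candidate critical data elements.

def profile(records):
    """Per-column null count and distinct non-null value count."""
    columns = {key for row in records for key in row}
    report = {}
    for col in sorted(columns):
        values = [row.get(col) for row in records]
        non_null = [v for v in values if v is not None]
        report[col] = {
            "nulls": len(values) - len(non_null),
            "distinct": len(set(non_null)),
        }
    return report

# Hypothetical records from a patient-facing feed.
records = [
    {"patient_id": "P1", "region": "EU"},
    {"patient_id": "P2", "region": None},
    {"patient_id": "P2", "region": "EU"},
]

print(profile(records))
# -> {'patient_id': {'nulls': 0, 'distinct': 2}, 'region': {'nulls': 1, 'distinct': 1}}
```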

Data Architect Chennai, Tamil Nadu, India 0 years Not disclosed On-site Full Time

Company Description
Agilisium Consulting specializes in reimagining and co-developing AI-engineered business processes that are autonomous, scalable, and tailored for the Life Sciences industry. Recognized by leading independent analyst firms such as the Everest Group and ISG, and endorsed by consulting leaders like EY and technology giants such as AWS, Agilisium is known for its rigorous execution and continuous innovation. This commitment allows us to shape the future of the life sciences sector effectively.

Role Description
This is a full-time, on-site role for a Data Architect located in Chennai. The Data Architect will be responsible for designing and developing data architecture solutions, leading data modeling efforts, and ensuring effective data governance. Daily tasks include managing data warehousing, overseeing Extract Transform Load (ETL) processes, and collaborating with various teams to optimize data management practices.

Qualifications
- Strong skills in Data Architecture and Data Modeling
- Experience with Data Governance
- Proficiency in Extract Transform Load (ETL) processes
- Expertise in Data Warehousing
- Excellent problem-solving and analytical skills
- Strong communication and teamwork abilities
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- Experience in the Life Sciences industry is a plus

Senior Architect - Gen AI Chennai, Tamil Nadu, India 5 years Not disclosed On-site Full Time

Key Responsibilities
As a Data Architect with Generative AI expertise, you will:
- Design and implement robust data architectures that support AI and machine learning (ML) workloads, including Generative AI applications.
- Develop and optimize data pipelines for training, validating, and deploying AI models efficiently and securely.
- Integrate AI frameworks into data platforms, ensuring scalability, low latency, and high throughput.
- Collaborate with data scientists, AI engineers, and stakeholders to align data strategies with business goals.
- Lead initiatives to ensure data governance, security, and compliance standards (e.g., GDPR, CCPA) are met in AI-driven environments.
- Prototype and implement architectures that utilize generative models (e.g., GPT, Stable Diffusion) to enhance business processes.
- Stay up to date with the latest trends in Generative AI, data engineering, and cloud technologies to recommend and integrate innovations.

Required Qualifications
We're looking for someone with:
- A bachelor's degree in Computer Science, Data Engineering, or a related field (master's preferred).
- 5+ years of experience in data architecture, with a focus on AI/ML-enabled systems.
- Hands-on experience with Generative AI models (e.g., OpenAI GPT, BERT, or similar), including fine-tuning and deployment.
- Proficiency in data engineering tools and frameworks such as Apache Spark, Hadoop, and Kafka.
- Deep knowledge of database systems (SQL, NoSQL) and cloud platforms (AWS, Azure, GCP), including their AI/ML services (e.g., AWS SageMaker, Azure ML, GCP Vertex AI).
- Strong understanding of data governance, MLOps, and AI model lifecycle management.
- Experience with programming languages such as Python or R and frameworks like TensorFlow or PyTorch.
- Excellent problem-solving and communication skills, with a demonstrated ability to lead cross-functional teams.

Preferred Skills
- Familiarity with LLM fine-tuning, prompt engineering, and embedding models.
- Strong domain expertise in the Life Sciences industry.
- Experience integrating generative AI solutions into production-level applications.
- Knowledge of vector databases (e.g., Pinecone, Weaviate) for storing and retrieving embeddings.
- Expertise in APIs for AI models, such as the OpenAI API or Hugging Face Transformers.

Delivery Manager - Life Sciences Chennai, Tamil Nadu, India 18 years Not disclosed On-site Full Time

We are seeking a Delivery Manager with 18+ years of experience and deep expertise in the Life Sciences (Pharma, Biotech) domain to lead the end-to-end delivery of IT projects, ensuring alignment with business goals, compliance, and innovation. The ideal candidate will manage cross-functional teams, oversee Agile/Waterfall delivery, and drive digital transformation in areas like R&D, Clinical Trials, Regulatory, Manufacturing, or Commercial Operations.

Key Responsibilities
- Project Delivery Leadership: Lead the planning, execution, and delivery of IT projects (e.g., SaaS, ERP, Data Analytics, Cloud, LIMS, CTMS, QMS) within the Life Sciences domain. Ensure projects meet scope, timeline, budget, and quality standards (GxP, FDA, HIPAA, GDPR compliance as needed).
- Stakeholder & Vendor Management: Collaborate with business stakeholders (R&D, Manufacturing, QA, Commercial) to translate requirements into IT solutions. Manage third-party vendors, SaaS providers, and offshore teams.
- Domain Expertise: Apply knowledge of Life Sciences processes (e.g., Clinical Development, Regulatory Submissions, Serialization, Supply Chain, Pharmacovigilance). Stay updated on industry trends (e.g., Digital Health, AI/ML in Drug Discovery, Real-World Evidence).
- Risk & Compliance: Mitigate risks and ensure adherence to regulatory standards (21 CFR Part 11, ISO 13485, ICH-GCP). Drive validation (IQ/OQ/PQ) for regulated systems.
- Team Leadership: Mentor teams (developers, analysts, QA) and foster Agile/DevOps practices. Optimize delivery processes using tools like JIRA, ServiceNow, or Azure DevOps.

Qualifications
- Hands-on experience with systems like Veeva, Salesforce Health Cloud, SAP S/4HANA, LabVantage, or Medidata Rave.
- Certifications: PMP, CSM, SAFe, or ITIL preferred.
- Skills: Strong understanding of SDLC, Agile, and compliance frameworks. Excellent communication and stakeholder management.

Preferred Attributes
- Experience with AI/ML, IoT, or Blockchain in Life Sciences.
- Knowledge of emerging tech (e.g., Federated Learning for Clinical Trials).

Data Engineer - AWS & Python Vellore, Tamil Nadu 6 - 10 years Not disclosed On-site Full Time

As a Data Engineer, you will be responsible for designing, developing, and optimizing data pipelines and ETL workflows using AWS Glue, AWS Lambda, and Apache Spark. Your role will involve implementing big data processing solutions utilizing AWS EMR and AWS Redshift. You will also be tasked with developing and maintaining data lakes and data warehouses on AWS, including S3, Redshift, and RDS. Ensuring data quality, integrity, and governance will be a key aspect of your responsibilities, achieved by leveraging the AWS Glue Data Catalog and AWS Lake Formation. It will be essential for you to optimize data storage and processing for both performance and cost efficiency.

Working with structured, semi-structured, and unstructured data across various storage formats such as Parquet, Avro, and JSON will be part of your daily tasks. Automation and orchestration of data workflows using AWS Step Functions and Apache Airflow will also fall within your scope of work. You will be expected to implement best practices for CI/CD pipelines in data engineering with AWS CodePipeline and AWS CodeBuild. Monitoring, troubleshooting, and optimizing data pipeline performance and scalability will be critical to ensuring smooth operations. Collaborating with cross-functional teams, including data scientists, analysts, and software engineers, will be necessary to drive successful outcomes.

Your role will require a minimum of 6 years of experience in data engineering and big data processing. Proficiency in AWS cloud services like AWS Glue, AWS Lambda, AWS Redshift, AWS EMR, and S3 is paramount. Strong skills in Python for data engineering tasks, hands-on experience with Apache Spark and SQL, and knowledge of data modeling, schema design, and performance tuning are essential. Understanding of AWS Lake Formation and Lakehouse principles, experience with version control using Git, and familiarity with CI/CD pipelines are also required.

Knowledge of data security, compliance, and governance best practices is crucial. Experience with real-time streaming technologies such as Kafka and Kinesis will be an added advantage. Strong problem-solving, analytical, and communication skills are key attributes for success in this role.
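The workflow orchestration mentioned above (AWS Step Functions, Apache Airflow) boils down to running tasks in an order that respects declared dependencies. The sketch below shows that core idea with a hypothetical extract/transform/validate/load graph; it omits the scheduling, retries, and cycle detection a real orchestrator provides.

```python
# Sketch of dependency-ordered task execution, the core of workflow
# orchestrators like Airflow or Step Functions: depth-first resolution
# of each task's upstream dependencies before the task itself.

def run_order(deps):
    """deps maps task -> list of upstream tasks; returns a valid run order."""
    order, done = [], set()

    def visit(task):
        if task in done:
            return
        for upstream in deps.get(task, []):
            visit(upstream)  # ensure upstream tasks are scheduled first
        done.add(task)
        order.append(task)

    for task in deps:
        visit(task)
    return order

# Hypothetical pipeline: extract -> transform -> validate -> load.
deps = {
    "extract": [],
    "transform": ["extract"],
    "validate": ["transform"],
    "load": ["transform", "validate"],
}

print(run_order(deps))  # -> ['extract', 'transform', 'validate', 'load']
```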

Presales Director Chennai, Tamil Nadu, India 5 - 15 years Not disclosed On-site Full Time

Role: Presales Consultant
Experience: 5-15 years
Location: Chennai/Hyderabad (Work From Office Only)

Role Summary
We are looking for dynamic and customer-focused individuals to join our Proactive Presales Team, a specialized unit within Agilisium that drives opportunity generation and solution discovery ahead of traditional RFP cycles. You will collaborate with Sales, Marketing, Delivery, and Technology teams to craft tailored, value-driven solutions that align with evolving customer challenges.

Key Responsibilities
- Identify and analyze target accounts and proactively develop use-case-based solution propositions
- Partner with sales teams to co-create GTM strategies for specific industries or technology capabilities
- Translate customer challenges into solution frameworks, PoVs, and proposals before formal engagements
- Prepare solution blueprints, architecture diagrams, and effort estimations with delivery teams
- Lead customer discovery workshops and demos to articulate Agilisium's capabilities and differentiators
- Own creation of reusable content: pitch decks, case studies, capability documents, and playbooks
- Continuously monitor industry trends and identify whitespace opportunities
- Support marketing with technical input for webinars, whitepapers, and thought leadership
- Ensure a seamless handover from presales to delivery teams for closed deals

Must-Have Skills
- 5-15 years of experience in presales, solutions engineering, or business analysis in the data/cloud domain
- Strong understanding of the Life Sciences domain and its value chain
- Understanding of cloud platforms, data engineering, analytics, and AI/ML services
- Proven ability to translate technical solutions into business value propositions
- Exceptional communication, storytelling, and presentation skills
- Experience in early-stage sales cycles or opportunity incubation
- Ability to work cross-functionally in a fast-paced, dynamic environment

Good-to-Have Skills
- Biotech background / exposure to the Life Sciences industry
- Preferable tech certifications (e.g., Solutions Architect – Associate/Professional)
- Exposure to Life Sciences-specific use cases (e.g., R&D, Commercial, Clinical)
- Experience giving presentations to customer stakeholders
- Familiarity with sales tools like Salesforce, HubSpot, or LinkedIn Sales Navigator

Why Join Us?
- Be part of a high-impact team influencing pipeline creation and strategic solutioning
- Work with top-tier Domain SMEs and Cloud/Data engineering experts
- Accelerate your career with leadership exposure and ownership of key initiatives
- Continuous learning opportunities