
8586 Data Modeling Jobs - Page 42

JobPe aggregates job listings for convenient browsing; applications are submitted directly on the original job portal.

7.0 - 10.0 years

7 - 11 Lacs

Bengaluru

Work from Office

Position Overview:
We are seeking an experienced NLP & LLM Specialist to join our team. The ideal candidate will have deep expertise in transformer-based models such as GPT, BERT, T5, and RoBERTa, experience fine-tuning pre-trained models on domain-specific tasks, and skill in crafting and optimizing prompts for natural language processing tasks such as text generation, summarization, question answering, classification, and translation. The candidate should be proficient in Python, familiar with NLP libraries such as Hugging Face, spaCy, and NLTK, and have a solid understanding of model evaluation metrics.

Roles and Responsibilities:
- Model Expertise: Work with transformer models such as GPT, BERT, T5, and RoBERTa for a variety of NLP tasks, including text generation, summarization, classification, and translation.
- Model Fine-Tuning: Fine-tune pre-trained models on domain-specific datasets to improve performance for applications such as summarization, text generation, and question answering.
- Prompt Engineering: Craft clear, concise, and contextually relevant prompts to guide transformer-based models toward desired outputs, and iterate on prompts to optimize model performance.
- Instruction-Based Prompting: Implement instruction-based prompting to steer the model toward specific goals, ensuring outputs are contextually accurate and aligned with task objectives.
- Zero-shot, Few-shot, and Many-shot Learning: Apply zero-shot, few-shot, and many-shot techniques to improve model performance without full retraining.
- Chain-of-Thought (CoT) Prompting: Use CoT prompting to guide models through complex reasoning tasks, producing logically structured, step-by-step outputs.
- Model Evaluation: Use metrics such as BLEU and ROUGE to assess and improve model performance across NLP tasks.
- Model Deployment: Support the deployment of trained models into production environments and integrate them into existing systems for real-time applications.
- Bias Awareness: Identify and mitigate issues related to bias, hallucinations, and knowledge cutoffs in LLMs, ensuring high-quality and reliable outputs.
- Collaboration: Work with cross-functional teams, including engineers, data scientists, and product managers, to deliver efficient and scalable NLP solutions.

Must-Have Skills:
- 7+ years overall, with at least 5 years working with transformer-based models and NLP tasks, focused on text generation, summarization, question answering, and classification.
- Expertise in transformer models such as GPT (Generative Pre-trained Transformer), BERT (Bidirectional Encoder Representations from Transformers), T5 (Text-to-Text Transfer Transformer), and RoBERTa.
- Familiarity with model architectures, attention mechanisms, and self-attention layers that enable LLMs to generate human-like text.
- Experience fine-tuning pre-trained models on domain-specific datasets for text generation, summarization, question answering, classification, and translation.
- Familiarity with concepts such as context windows, tokenization, and embedding layers.
- Awareness of biases, hallucinations, and knowledge cutoffs that can affect LLM performance and output quality.
- Expertise in crafting clear, concise, and contextually relevant prompts, including instruction-based prompting.
- Use of zero-shot, few-shot, and many-shot learning techniques to maximize model performance without retraining.
- Experience iterating on prompts to refine outputs, test model performance, and ensure consistent results, including building prompt templates for repetitive tasks that adapt to different contexts and inputs.
- Expertise in chain-of-thought (CoT) prompting to guide LLMs through complex reasoning tasks via step-by-step breakdowns.
- Proficiency in Python and experience with NLP libraries (e.g., Hugging Face, spaCy, NLTK).
- Experience training, fine-tuning, and deploying machine learning models in an NLP context.
- Understanding of model evaluation metrics (e.g., BLEU, ROUGE).

Qualifications:
- BE/B.Tech or equivalent degree in Computer Science or a related field.
- Excellent communication skills in English, both verbal and written.
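
For context, a minimal sketch of two techniques this posting names, a few-shot prompt template and BLEU/ROUGE evaluation of generated text. It assumes the third-party nltk and rouge-score packages are installed; all example texts are hypothetical.

```python
# Few-shot prompt construction plus BLEU/ROUGE scoring of a candidate summary.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

def few_shot_prompt(examples, query):
    """Build a few-shot prompt: worked examples followed by the new input."""
    shots = "\n\n".join(f"Text: {t}\nSummary: {s}" for t, s in examples)
    return f"{shots}\n\nText: {query}\nSummary:"

examples = [
    ("The quarterly report showed revenue growth of 12%.", "Revenue grew 12%."),
    ("The new policy reduces onboarding time from 10 days to 3.", "Onboarding cut to 3 days."),
]
print(few_shot_prompt(examples, "The team shipped the feature two weeks early."))

reference = "Revenue grew 12 percent this quarter."
candidate = "Quarterly revenue grew by 12 percent."

# BLEU works on token lists; smoothing avoids zero scores on short strings.
bleu = sentence_bleu([reference.split()], candidate.split(),
                     smoothing_function=SmoothingFunction().method1)
rouge = rouge_scorer.RougeScorer(["rouge1", "rougeL"]).score(reference, candidate)
print(f"BLEU: {bleu:.3f}, ROUGE-L F1: {rouge['rougeL'].fmeasure:.3f}")
```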

Posted 1 week ago

Apply

5.0 - 10.0 years

6 - 10 Lacs

Mumbai

Remote

Travel Requirement: Willingness to travel to the UK as needed is a plus.

Job Description:
We are seeking a highly experienced Senior Data Engineer with a background in Microsoft Fabric and hands-on project experience with it. This is a remote position based in India, ideal for professionals who are open to occasional travel to the UK; a valid passport is required.

Key Responsibilities:
- Design and implement scalable data solutions using Microsoft Fabric
- Lead complex data integration, transformation, and migration projects
- Collaborate with global teams to deliver end-to-end data pipelines and architecture
- Optimize performance of data systems and troubleshoot issues proactively
- Ensure data governance, security, and compliance with industry best practices

Required Skills and Experience:
- 5+ years of experience in data engineering, including architecture and development
- Expertise in Microsoft Fabric, Data Lake, Azure Data Services, and related technologies
- Experience in SQL, data modeling, and data pipeline development
- Knowledge of modern data platforms and big data technologies
- Excellent communication and leadership skills

Preferred Qualifications:
- Good communication skills
- Understanding of data governance and security best practices

Perks & Benefits:
- Work-from-home flexibility
- Competitive salary and perks
- Opportunities for international exposure
- Collaborative and inclusive work culture
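
For context, a minimal sketch of the kind of pipeline work described here. Microsoft Fabric notebooks run PySpark with a ready SparkSession; the table names and schema below are invented for illustration.

```python
# Bronze-to-silver transform of the style used in Fabric lakehouse notebooks.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

raw = spark.read.table("bronze_sales")  # hypothetical bronze table

clean = (
    raw.dropDuplicates(["order_id"])                       # de-duplicate on key
       .withColumn("order_date", F.to_date("order_date"))  # normalize types
       .filter(F.col("amount") > 0)                        # basic validation
)

clean.write.mode("overwrite").saveAsTable("silver_sales")  # curated layer
```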

Posted 1 week ago

Apply

2.0 - 6.0 years

2 - 6 Lacs

Bengaluru

Work from Office

Key Responsibilities:
- Data Model & Transformation: Develop a deep understanding of JSON data models, writing complex queries and managing data transformation processes to enable robust analytics and reporting.
- Data Analysis & Visualization: Actively analyze, visualize, and provide insightful analytics on data to build comprehensive reporting solutions that support various company initiatives.
- BI & Data Warehousing Development: Participate in the ongoing development and enhancement of the Business Intelligence (BI) and Data Warehousing functions within the wider organization.
- Dashboard Development: Build rich, dynamic dashboards using out-of-the-box features, custom solutions, and advanced visualizations leveraging D3.js, Angular.js, or equivalent technologies.
- BI Standards & Best Practices: Contribute to the creation and support of BI development standards and best practices to ensure consistency and quality across solutions.
- Technology Exploration: Explore and recommend emerging technologies and techniques to support and enhance existing BI landscape components and capabilities.

Required Skills & Qualifications:
- At least 2 years of experience in data visualization.
- Demonstrable skills in building responsive user interfaces and data visualizations using Angular.js and D3.js.
- Strong web development experience, including JavaScript, CSS, HTML, and general visualization principles.
- Proficiency in Python or other scripting languages for data manipulation and backend processes.
- At least 6 months of experience with Oracle RDBMS (SQL/PL-SQL) or MySQL.
- Strong analytical and problem-solving skills, with an ability to understand and interpret complex data sets.
- Excellent communication skills, both verbal and written, for collaborating with technical and non-technical stakeholders.
- Bachelor's degree in Computer Science, Business, Business Administration, or a closely related field, or foreign equivalent.
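
For context, a minimal sketch of the "JSON data model to flat reporting table" step this posting describes, using pandas. The nested order records are invented.

```python
# Flatten nested JSON into a dashboard-ready table with pandas.
import pandas as pd

orders = [
    {"order_id": 1, "customer": {"name": "Asha", "city": "Bengaluru"},
     "items": [{"sku": "A1", "qty": 2}, {"sku": "B2", "qty": 1}]},
    {"order_id": 2, "customer": {"name": "Ravi", "city": "Mumbai"},
     "items": [{"sku": "A1", "qty": 5}]},
]

# One row per line item, carrying order- and customer-level fields along.
flat = pd.json_normalize(
    orders,
    record_path="items",
    meta=["order_id", ["customer", "name"], ["customer", "city"]],
)
print(flat)

# Aggregate into a series a D3.js or Angular.js front end could chart.
print(flat.groupby("customer.city")["qty"].sum())
```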

Posted 1 week ago

Apply

3.0 - 6.0 years

5 - 8 Lacs

Bengaluru

Work from Office

Duration: 6 Months
Timings: General IST
Notice Period: within 15 days or immediate joiner

About The Role:
As a Data Engineer for the Data Science team, you will play a pivotal role in enriching and maintaining the organization's central repository of datasets. This repository serves as the backbone for advanced data analytics and machine learning applications, enabling actionable insights from financial and market data. You will work closely with cross-functional teams to design and implement robust ETL pipelines that automate data updates and ensure accessibility across the organization. This is a critical role requiring technical expertise in building scalable data pipelines, ensuring data quality, and supporting data analytics and reporting infrastructure for business growth.

Note: Must be ready for a face-to-face interview in Bangalore (final round). Should be working with Azure as the cloud technology.

Key Responsibilities:

ETL Development:
- Design, develop, and maintain efficient ETL processes for handling multi-scale datasets.
- Implement and optimize data transformation and validation processes to ensure data accuracy and consistency.
- Collaborate with cross-functional teams to gather data requirements and translate business logic into ETL workflows.

Data Pipeline Architecture:
- Architect, build, and maintain scalable, high-performance data pipelines to enable seamless data flow.
- Evaluate and implement modern technologies to enhance the efficiency and reliability of data pipelines.
- Build pipelines for extracting data via web scraping to source sector-specific datasets on an ad hoc basis (see the sketch after this listing).

Data Modeling:
- Design and implement data models to support analytics and reporting needs across teams.
- Optimize database structures to enhance performance and scalability.

Data Quality and Governance:
- Develop and implement data quality checks and governance processes to ensure data integrity.
- Collaborate with stakeholders to define and enforce data quality standards across the organization.

Documentation and Communication:
- Maintain detailed documentation of ETL processes, data models, and other key workflows.
- Effectively communicate complex technical concepts to non-technical stakeholders and business teams.

Collaboration:
- Work closely with the Quant team and developers to design and optimize data pipelines.
- Collaborate with external stakeholders to understand business requirements and translate them into technical solutions.

Basic Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Familiarity with big data technologies like Hadoop, Spark, and Kafka.
- Experience with data modeling tools and techniques.
- Excellent problem-solving, analytical, and communication skills.
- Proven experience as a Data Engineer with expertise in ETL techniques.
- 3-6 years of strong programming experience in languages such as Python, Java, or Scala.
- Hands-on experience in web scraping to extract and transform data from publicly available web sources.
- Proficiency with cloud-based data platforms such as AWS, Azure, or GCP.
- Strong knowledge of SQL and experience with relational and non-relational databases.
- Deep understanding of data warehousing concepts.

Preferred Qualifications:
- Master's degree in Computer Science or Data Science.
- Knowledge of data streaming and real-time processing frameworks.
- Familiarity with data governance and security best practices.
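
For context, a minimal sketch of the ad hoc web-scraping extraction this posting mentions: fetch a page, parse an HTML table, and load it into a DataFrame for the downstream pipeline. The URL and table layout are hypothetical; a real scraper should also respect robots.txt and rate limits.

```python
# Scrape a simple HTML price table into pandas for downstream ETL.
import requests
import pandas as pd
from bs4 import BeautifulSoup

URL = "https://example.com/sector-prices"  # hypothetical source

resp = requests.get(URL, timeout=30)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
rows = []
for tr in soup.select("table#prices tr")[1:]:   # skip the header row
    cells = [td.get_text(strip=True) for td in tr.find_all("td")]
    if len(cells) == 2:                         # expected: ticker, price
        rows.append({"ticker": cells[0], "price": float(cells[1])})

df = pd.DataFrame(rows)
print(df.head())
```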

Posted 1 week ago

Apply

3.0 - 7.0 years

5 - 9 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Job Description:
KPI Partners is seeking an experienced Senior Snowflake Administrator to join our dynamic team. In this role, you will be responsible for managing and optimizing our Snowflake environment to ensure performance, reliability, and scalability. Your expertise will contribute to designing and implementing best practices that facilitate efficient data warehousing solutions.

Key Responsibilities:
- Administer and manage the Snowflake platform, ensuring optimal performance and security.
- Monitor system performance, troubleshoot issues, and implement necessary solutions.
- Collaborate with data architects and engineers to design data models and optimal ETL processes.
- Conduct regular backups and recovery procedures to protect data integrity.
- Implement user access controls and security measures to safeguard data.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
- Participate in the planning and execution of data migration to Snowflake.
- Provide support for data governance and compliance initiatives.
- Stay updated with Snowflake features and best practices, and provide recommendations for continuous improvement.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in database administration, with a strong focus on Snowflake.
- Hands-on experience with SnowSQL, SQL, and data modeling.
- Familiarity with data ingestion tools and ETL processes.
- Strong problem-solving skills and the ability to work independently.
- Excellent communication skills and the ability to collaborate with technical and non-technical stakeholders.
- Relevant certifications in Snowflake or cloud data warehousing are a plus.

If you are a proactive, detail-oriented professional with a passion for data and experience in Snowflake administration, we would love to hear from you. Join KPI Partners and be part of a team dedicated to delivering exceptional data solutions for our clients.
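
For context, a minimal sketch of the user access control task this posting describes, using the snowflake-connector-python package. The account, credentials, and all object names are hypothetical placeholders.

```python
# Create a least-privilege read-only role in Snowflake and grant it to a user.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",       # hypothetical account locator
    user="ADMIN_USER",
    password="***",          # in practice, pull from a secrets manager
    role="SECURITYADMIN",
)

cur = conn.cursor()
try:
    cur.execute("CREATE ROLE IF NOT EXISTS ANALYST_RO")
    cur.execute("GRANT USAGE ON DATABASE SALES_DB TO ROLE ANALYST_RO")
    cur.execute("GRANT USAGE ON SCHEMA SALES_DB.PUBLIC TO ROLE ANALYST_RO")
    cur.execute("GRANT SELECT ON ALL TABLES IN SCHEMA SALES_DB.PUBLIC TO ROLE ANALYST_RO")
    cur.execute("GRANT ROLE ANALYST_RO TO USER ASHA")  # hypothetical user
finally:
    cur.close()
    conn.close()
```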

Posted 1 week ago

Apply

4.0 - 7.0 years

12 - 22 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Job Description
Location: Kumbalgodu, Kengeri, Bangalore (Onsite)
Type: Full-time | Monday to Saturday
Experience: 3+ years in ERP Implementation

We at Girish Exports are transitioning from a legacy ERP (Visual Gems) to a custom-built system on Zoho Creator. We're looking for a practical, hands-on ERP Implementation Lead who understands real-world operations and knows how to bring tech and people together.

What You'll Do:
- Lead the planning and rollout of our ERP system across departments
- Work closely with developers and business users to map operations into usable system workflows
- Design modular data flows that connect upstream and downstream processes
- Collaborate with department heads to drive adoption and coordinate training plans
- Ensure the ERP system supports teams like merchandising, production, stores, finance, HR, and maintenance
- Identify bottlenecks, simplify processes, and make sure solutions work in the real world, not just on paper
- Occasional travel to factory units will be required; expenses will be fully covered by the company

You Should Have:
- 3+ years of ERP implementation experience in complex, real-world setups
- Mandatory hands-on experience with Zoho Creator
- Strong understanding of operational workflows, data architecture, and process mapping
- Ability to work with non-tech users (shop floor, stores, admin) and ensure smooth adoption
- Excellent communication and cross-functional collaboration skills
- A mindset focused on outcomes, not just systems

Why Join Us?
If you're excited by driving real change, making a tangible impact on day-to-day operations, and using data-driven insights to improve how things actually work on the ground, this is the role for you. You'll help shape a custom-built ERP system from the ground up.

Posted 1 week ago

Apply

8.0 - 13.0 years

19 - 20 Lacs

Chennai

Work from Office

Job Overview:
We are seeking an experienced MicroStrategy (MSTR) Consultant with 8+ years of expertise to support our offshore operations. The ideal candidate should have deep knowledge of metadata, data modeling, dashboard development, and performance optimization. This role requires strong troubleshooting skills and the ability to collaborate with business users to answer technical queries.

Role & Responsibilities:
- Develop, configure, and maintain MicroStrategy solutions based on business needs.
- Understand and manage metadata, ensuring efficient data model configurations.
- Work with star schemas, slowly changing dimensions (SCDs), and hierarchical models to optimize reporting.
- Design and create interactive dashboards tailored for various business requirements.
- Configure VLDB settings to fine-tune report performance.
- Establish parent-child relationships between attributes and configure level metrics.
- Develop and manage Documents and Dossiers for data visualization and reporting.
- Monitor and troubleshoot MSTR infrastructure issues, including certificate renewals, Catalina logs, and service restarts.
- Collaborate with business teams to understand reporting needs and provide technical guidance.
- Identify performance bottlenecks and optimize reports for faster execution.
- Power BI knowledge is a plus, as it may be required for certain integrations.
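
For context, a minimal sketch of the Type 2 slowly changing dimension (SCD2) pattern named above: on an attribute change, close out the current row and add a new one, preserving history for reporting. The data and columns are invented; this illustrates the logic outside any specific BI tool.

```python
# SCD Type 2 upsert of a single incoming customer record, with pandas.
import pandas as pd

dim = pd.DataFrame([
    {"cust_id": 1, "city": "Pune", "valid_from": "2023-01-01",
     "valid_to": None, "is_current": True},
])

incoming = {"cust_id": 1, "city": "Bengaluru", "effective": "2024-06-01"}

current = dim[(dim.cust_id == incoming["cust_id"]) & dim.is_current]
if not current.empty and current.iloc[0].city != incoming["city"]:
    idx = current.index[0]
    # Expire the old row, then append the new version as current.
    dim.loc[idx, ["valid_to", "is_current"]] = [incoming["effective"], False]
    dim = pd.concat([dim, pd.DataFrame([{
        "cust_id": incoming["cust_id"], "city": incoming["city"],
        "valid_from": incoming["effective"], "valid_to": None,
        "is_current": True,
    }])], ignore_index=True)

print(dim)
```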

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

We are looking for a skilled ETL Tester with hands-on experience in SQL and Python to join our Quality Engineering team. The ideal candidate will be responsible for validating data pipelines, ensuring data quality, and supporting the end-to-end ETL testing lifecycle in a fast-paced environment.

Responsibilities:
- Design, develop, and execute test cases for ETL workflows and data pipelines.
- Perform data validation and reconciliation using advanced SQL queries.
- Use Python to automate test scripts, data comparison, and validation tasks.
- Work closely with Data Engineers and Business Analysts to understand data transformations and business logic.
- Perform root cause analysis of data discrepancies and report defects in a timely manner.
- Validate data across source systems, staging, and target data stores (e.g., data lakes, data warehouses).
- Participate in Agile ceremonies, including sprint planning and daily stand-ups.
- Maintain test documentation, including test plans, test cases, and test results.

Required qualifications to be successful in this role:
- 5+ years of experience in ETL/data warehouse testing.
- Strong proficiency in SQL (joins, aggregations, window functions, etc.).
- Experience in Python scripting for test automation and data validation.
- Hands-on experience with tools like Informatica, Talend, Apache NiFi, or similar ETL tools.
- Understanding of data models, data marts, and star/snowflake schemas.
- Familiarity with test management and bug tracking tools (e.g., JIRA, HP ALM).
- Strong analytical, debugging, and problem-solving skills.

Good to Have:
- Exposure to big data technologies (e.g., Hadoop, Hive, Spark).
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and related data services.
- Knowledge of CI/CD tools and automated data testing frameworks.
- Experience working in Agile/Scrum teams.

Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect, and belonging. Here, you'll reach your full potential because you are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.
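
For context, a minimal sketch of the source-versus-target reconciliation checks this posting describes, written as pytest tests. sqlite3 stands in for the real source and target databases; the table names and data are invented.

```python
# Reconciliation tests: row counts and column totals must match across systems.
import sqlite3
import pytest

@pytest.fixture
def dbs():
    src, tgt = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
    for db in (src, tgt):
        db.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
        db.executemany("INSERT INTO orders VALUES (?, ?)",
                       [(1, 10.0), (2, 25.5), (3, 7.25)])
    yield src, tgt
    src.close()
    tgt.close()

def test_row_counts_match(dbs):
    src, tgt = dbs
    n_src = src.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    n_tgt = tgt.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    assert n_src == n_tgt

def test_amount_totals_match(dbs):
    src, tgt = dbs
    s = src.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
    t = tgt.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
    assert s == pytest.approx(t)
```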

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Position: Senior Data Scientist
Experience: 5+ years
Location: Chennai, Hyderabad, Bangalore
Work Mode: 5 days work from office

Role Summary:
We are looking for a seasoned Data Scientist to lead the development of data-driven solutions that drive strategic decision-making. You will work closely with business stakeholders, engineers, and analysts to uncover insights, build predictive models, and deploy scalable data products.

Key Responsibilities:
- Analyze large, complex datasets to extract actionable insights and trends.
- Design and implement machine learning models and statistical algorithms.
- Collaborate with cross-functional teams to define data strategies and KPIs.
- Develop data pipelines and automate data collection and preprocessing.
- Communicate findings through compelling visualizations and presentations.
- Mentor junior data scientists and contribute to best practices in data science.

Required Skills:
- Proficiency in Python, R, and SQL.
- Strong grasp of statistics, machine learning, and data modeling techniques.
- Experience with tools like scikit-learn, TensorFlow, PyTorch, or Spark.
- Familiarity with cloud platforms (AWS, Azure, or GCP).
- Expertise in data visualization tools (e.g., Tableau, Power BI, Plotly).
- Ability to translate business problems into analytical solutions.
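
For context, a minimal sketch of the predictive-modeling workflow this posting outlines: preprocess, fit, and evaluate a scikit-learn model. It uses a bundled toy dataset so it runs as-is; a real project would load business data instead.

```python
# Fit and evaluate a classifier inside a preprocessing pipeline.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Scaling + model in one pipeline keeps preprocessing out of the test fold.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```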

Posted 1 week ago

Apply

12.0 - 15.0 years

20 - 25 Lacs

Hyderabad

Work from Office

The Data Privacy and Policy Lead will be responsible for defining, operationalizing, and monitoring data access controls to ensure only authorized users and AI agents have access to sensitive data. He/she will define frameworks to identify sensitive data across the enterprise and will work closely with the Data Governance team to ensure all assets are properly classified. He/she will coordinate activities at the tactical level, interpreting Enterprise Data Council direction and defining operational-level deliverables and actions to build secure data foundations that support the vision of democratizing data with proper security controls. The Data Privacy and Policy Lead will partner with the Enterprise Data Office, senior leadership, data governance functional leads, and the Chief Privacy Officer to protect sensitive data, and will establish and enforce data access controls and policies to accelerate access to data in a secure, scalable environment.

Roles & Responsibilities:
- Co-develop frameworks, in partnership with the Enterprise Data Management Platforms team, to enable technology to discover, tag, and generate metadata to manage data access controls.
- Manage a team of Data Governance Specialists and Data Stewards, directly or in a matrix organization structure.
- Operationalize data access controls and, in partnership with functional data owners and technology teams, ensure data access controls and compliance with privacy and security regulations are enforced.
- Maintain policies and ensure compliance with data privacy, security, and regulatory policies.
- Publish metrics to measure effectiveness and drive adoption of data access policies and standards, applied to mitigate identified risks across the data lifecycle (e.g., capture/production, aggregation/processing, sharing, reporting/consumption).

Functional Skills:

Must-Have Skills:
- Technical data management skills with in-depth knowledge of pharma data regulations; aware of industry trends and priorities and able to apply them to governance and policies.
- In-depth knowledge and experience with data masking, data access controls, and technologies that enable a scalable operating model.
- Experience with the data product development life cycle, including the enablement of data dictionaries and a business glossary to increase data product reusability and data literacy.

Good-to-Have Skills:
- Experience managing industry external data assets (e.g., Claims, EHR).
- Ability to successfully execute complex projects in a fast-paced environment and manage multiple priorities effectively.
- Ability to manage projects or departmental budgets.
- Experience with modeling tools (e.g., Visio).
- Basic programming skills; experience with data visualization and data modeling tools.
- Experience working with agile development methodologies such as Scaled Agile.

Soft Skills:
- Ability to build business relationships and understand end-to-end data use and needs.
- Excellent interpersonal skills (team player).
- People management skills, either in a matrix or direct-line function.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Good presentation and public speaking skills.
- Strong attention to detail, quality, time management, and customer focus.

Basic Qualifications:
- 12 to 15 years of Information Systems experience.
- 4 years of managerial experience directly managing people, and leadership experience leading teams, projects, or programs.
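
For context, a minimal sketch of one data-masking technique relevant to this role: deterministic pseudonymization of direct identifiers with a keyed hash, so joins across datasets still work but raw values are not exposed. The data is invented, and in production the key would live in a secrets manager.

```python
# Keyed-hash pseudonymization of identifier columns with pandas.
import hashlib
import hmac
import pandas as pd

SECRET_KEY = b"rotate-me"  # hypothetical key; never hard-code in production

def pseudonymize(value: str) -> str:
    """Keyed hash keeps the mapping stable but non-reversible without the key."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

patients = pd.DataFrame({
    "patient_id": ["P001", "P002"],
    "email": ["asha@example.com", "ravi@example.com"],
    "diagnosis_code": ["E11", "I10"],   # analytic field, left in the clear
})

masked = patients.assign(
    patient_id=patients["patient_id"].map(pseudonymize),
    email="***redacted***",
)
print(masked)
```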

Posted 1 week ago

Apply

3.0 - 5.0 years

20 - 25 Lacs

Hyderabad

Work from Office

As part of the cybersecurity organization, in this vital role you will be responsible for designing, building, and maintaining data infrastructure to support data-driven decision-making. This role involves working with large datasets, developing reports, executing data governance initiatives, and ensuring data is accessible, reliable, and efficiently managed. The role sits at the intersection of data infrastructure and business insight delivery, requiring the Data Engineer to design and build robust data pipelines while also translating data into meaningful visualizations for stakeholders across the organization. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture, ETL processes, and cybersecurity data frameworks.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing.
- Be a key team member assisting in the design and development of the data pipeline.
- Build data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems.
- Develop and maintain interactive dashboards and reports using tools like Tableau, ensuring data accuracy and usability.
- Schedule and manage workflows to ensure pipelines run on schedule and are monitored for failures.
- Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs.
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency.
- Implement data security and privacy measures to protect sensitive data.
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
- Collaborate and communicate effectively with product teams.
- Collaborate with data scientists to develop pipelines that meet dynamic business needs.
- Share and discuss findings with team members practicing the SAFe Agile delivery model.

What we expect of you:
We are all different, yet we all use our unique contributions to serve patients. The data engineering professional we seek is one with these qualifications.

Basic Qualifications:
- Master's degree and 1 to 3 years of Computer Science, IT, or related field experience; OR
- Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience; OR
- Diploma and 7 to 9 years of Computer Science, IT, or related field experience.

Preferred Qualifications:
- Hands-on experience with data practices, technologies, and platforms such as Databricks, Python, GitLab, and LucidChart.
- Hands-on experience with data visualization and dashboarding tools (Tableau, Power BI, or similar) is a plus.
- Proficiency in data analysis tools (e.g., SQL) and experience with data sourcing tools.
- Excellent problem-solving skills and the ability to work with large, complex datasets.
- Understanding of data governance frameworks, tools, and best practices.
- Knowledge of and experience with data standards (FAIR) and protection regulations and compliance requirements (e.g., GDPR, CCPA).

Good-to-Have Skills:
- Experience with ETL tools and Python packages for data processing and machine learning model development.
- Strong understanding of data modeling, data warehousing, and data integration concepts.
- Knowledge of Python/R, Databricks, and cloud data platforms.
- Experience working in a product team environment.
- Experience working in an Agile environment.

Professional Certifications:
- AWS Certified Data Engineer preferred.
- Databricks certification preferred.

Soft Skills:
- Initiative to explore alternate technologies and approaches to solving problems.
- Skilled in breaking down problems, documenting problem statements, and estimating efforts.
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to handle multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
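
For context, a minimal sketch of a monitored ETL step in the spirit of this posting: run a transform, apply data-quality gates, and log failures so a scheduler (Airflow, Databricks Jobs, or cron) can alert on a non-zero exit code. The security-event data and checks are invented.

```python
# A pipeline step with data-quality gates and failure logging.
import logging
import sys
import pandas as pd

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("etl.events")

def run() -> int:
    events = pd.DataFrame({
        "event_id": [1, 2, 3],
        "severity": ["high", "low", "medium"],
        "source_ip": ["10.0.0.5", "10.0.0.9", None],
    })

    # Data-quality gates: null keys and unexpected categories fail the run.
    if events["event_id"].isna().any():
        log.error("null event_id detected")
        return 1
    bad = ~events["severity"].isin(["low", "medium", "high"])
    if bad.any():
        log.error("unexpected severity values: %s",
                  events.loc[bad, "severity"].tolist())
        return 1

    log.warning("%d events missing source_ip", events["source_ip"].isna().sum())
    log.info("loaded %d events", len(events))  # stand-in for the real load step
    return 0

if __name__ == "__main__":
    sys.exit(run())
```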

Posted 1 week ago

Apply

4.0 - 6.0 years

10 - 14 Lacs

Hyderabad

Work from Office

In this role, you will design, build, and maintain data lake solutions for scientific data that drive business decisions for Research. You will build scalable, high-performance data engineering solutions for large scientific datasets and collaborate with Research stakeholders. The ideal candidate has experience in the pharmaceutical or biotech industry, strong technical skills, proficiency with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and implement data pipelines, ETL/ELT processes, and data integration solutions.
- Take ownership of data pipeline projects from inception to deployment, managing scope, timelines, and risks.
- Develop and maintain data models for biopharma scientific data, data dictionaries, and other documentation to ensure data accuracy and consistency.
- Optimize large datasets for query performance.
- Collaborate with global cross-functional teams, including research scientists, to understand data requirements and design solutions that meet business needs.
- Implement data security and privacy measures to protect sensitive data.
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
- Collaborate with Data Architects, Business SMEs, Software Engineers, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
- Identify and resolve complex data-related challenges.
- Adhere to best practices for coding, testing, and designing reusable code/components.
- Explore new tools and technologies to improve ETL platform performance.
- Participate in sprint planning meetings and provide estimates for technical implementation.
- Maintain comprehensive documentation of processes, systems, and solutions.

Basic Qualifications and Experience:
- Doctorate degree; OR
- Master's degree with 4-6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field; OR
- Bachelor's degree with 6-8 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field; OR
- Diploma with 10-12 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field.

Preferred Qualifications and Experience:
- 3+ years of experience implementing and supporting biopharma scientific research data analytics (software platforms).

Functional Skills:

Must-Have Skills:
- Proficiency in SQL and Python for data engineering, test automation frameworks (pytest), and scripting tasks.
- Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, SparkSQL), including workflow orchestration and performance tuning on big data processing.
- Excellent problem-solving skills and the ability to work with large, complex datasets.

Good-to-Have Skills:
- A passion for tackling complex challenges in drug discovery with technology and data.
- Strong understanding of data modeling, data warehousing, and data integration concepts.
- Strong experience using RDBMSs (e.g., Oracle, MySQL, SQL Server, PostgreSQL).
- Knowledge of cloud data platforms (AWS preferred).
- Experience with data visualization tools (e.g., Dash, Plotly, Spotfire).
- Experience with diagramming and collaboration tools such as Miro or Lucidchart for process mapping and brainstorming.
- Experience writing and maintaining technical documentation in Confluence.
- Understanding of data governance frameworks, tools, and best practices.

Professional Certifications:
- Databricks Certified Data Engineer Professional preferred.

Soft Skills:
- Excellent critical-thinking and problem-solving skills.
- Strong communication and collaboration skills.
- Demonstrated awareness of how to function in a team setting.
- Demonstrated presentation skills.

EQUAL OPPORTUNITY STATEMENT
We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
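
For context, a minimal sketch of the PySpark work this posting centers on: ingest a large scientific dataset, normalize it, and write it partitioned for fast queries. The file paths and assay schema are invented for illustration.

```python
# Ingest raw assay CSVs and write a partitioned, query-friendly parquet layer.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("assay_ingest").getOrCreate()

assays = (
    spark.read.option("header", True)
         .csv("s3://research-lake/raw/assays/*.csv")    # hypothetical path
         .withColumn("result_value", F.col("result_value").cast("double"))
         .withColumn("run_date", F.to_date("run_date"))
         .filter(F.col("result_value").isNotNull())     # drop unusable rows
)

# Partitioning by run_date lets downstream queries prune most files.
(assays.repartition("run_date")
       .write.mode("overwrite")
       .partitionBy("run_date")
       .parquet("s3://research-lake/curated/assays/"))
```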

Posted 1 week ago

Apply

5.0 - 9.0 years

2 - 6 Lacs

Hyderabad

Work from Office

We are looking for an experienced SAP Master Data Governance (MDG) techno-functional expert to co-design and drive the implementation of SAP Master Data Governance solutions. In this role, you will architect scalable, innovative systems and provide expert technical guidance, configuration, development, and maintenance aligned with Amgen's strategic objectives. You will collaborate closely with the MDG Product Owner, technical teams, other SAP S/4 functional and technical architects, and other functional MDG teams to implement, enhance, and optimize MDG master data replications and integrations, ensuring SAP MDG delivers maximum value across the organization.

Roles & Responsibilities:
- Collaborate with business stakeholders to understand data governance requirements and translate them into effective MDG solutions.
- Configure and implement SAP MDG solutions for MDG-Material, Business Partner, or Finance.
- Provide technical leadership and guidance to development teams, ensuring alignment with best practices and standards.
- Configure and customize SAP MDG on SAP S/4HANA in accordance with the MDG strategy.
- Develop and maintain data models, workflows, and business rules within the MDG framework.
- Collaborate with multi-functional teams to integrate MDG with other SAP modules and external systems.
- Ensure compliance with data governance policies and standards.
- Participate in project planning, estimation, and risk assessment.
- Mentor junior team members and contribute to knowledge sharing.
- Create comprehensive technical documentation, including design specifications, architecture diagrams, and user guides.
- Conduct training sessions for key partners and end-users as needed.
- Follow Agile software development methods to design, build, implement, and deploy.

Functional Skills:

Must-Have Skills:
- Experience in at least 2 SAP MDG implementations.
- Experience with at least 2 of the MDG data models, preferably including custom data models.
- Functional understanding of SAP master data and the MDG out-of-the-box solution.
- Technical expertise to build and develop workflows, validations, replication, etc.

Soft Skills:
- Strong analytical abilities to assess and improve master data processes and solutions.
- Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical collaborators.
- Effective problem-solving skills to address data-related issues and implement scalable solutions.
- Ability to work effectively with global, virtual teams.

Basic Qualifications:
- 5 to 9 years of Business, Engineering, IT, or related field experience.
- Expertise in the implementation of SAP MDG solutions (configuration, design, build, test, and deploy).
- Deep understanding of key SAP MDG concepts: Data Modeling, UI Modeling, Process Modeling, Governance Process, Mass Processing, DRF, DIF, BRF+, and Consolidation features plus DQM.
- Experience configuring rule-based workflows (serial, parallel, and combination) and user interface modeling.

EQUAL OPPORTUNITY STATEMENT
We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Posted 1 week ago

Apply

3.0 - 5.0 years

2 - 6 Lacs

Hyderabad

Work from Office

This Sr Associate Business Analyst will play a pivotal role in optimizing our organization's technical environment through data modeling, visualization, and strategic analysis. The position focuses on designing and delivering data-driven insights to support the Technology Rationalization Team, ensuring informed decision-making and prioritization of efforts. You will refine business processes, develop data models, and create visual reports that guide leadership in optimizing IT investments. Additionally, you will contribute to building a sustainable service model for technology rationalization, ensuring long-term efficiency and cost-effectiveness.

Roles & Responsibilities:
- Develop and maintain data models to support the Technology Rationalization Team in optimizing IT assets.
- Design and create interactive dashboards and visual reports to communicate insights effectively.
- Ensure data accuracy, consistency, and integrity across multiple sources.
- Provide senior leadership with data-driven insights to prioritize IT rationalization efforts.
- Identify opportunities to improve data collection, processing, and reporting workflows.
- Support the implementation of best practices in data governance and management.
- Work closely with technology teams to provide data-backed recommendations for IT asset optimization.
- Assist in building a scalable and sustainable service model for technology rationalization.

Functional Skills:

Must-Have Skills:
- Data Modeling & Management: strong ability to create, maintain, and optimize data models.
- Data Visualization: proficiency in tools like Power BI, Tableau, Excel, or similar platforms.
- Stakeholder Collaboration: experience working with cross-functional teams to align on data-driven priorities.

Good-to-Have Skills:
- SQL & Database Knowledge: experience working with relational databases, querying data, and optimizing datasets.
- Business Process Analysis: ability to assess and refine business processes for efficiency.
- Data Storytelling: capability to translate complex data into actionable insights for leadership.
- Experience with ServiceNow, especially CMDB, the Common Service Data Model (CSDM), and IT Service Management.
- Experience working in SAFe and/or Agile teams.
- Experience with process development/engineering.

Professional Certifications:
- SAFe for Teams certification (preferred).

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Able to work under minimal supervision.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.

Basic Qualifications:
- Master's degree and 1 to 3 years of Information Security or IT experience; OR
- Bachelor's degree and 3 to 5 years of Information Security or IT experience; OR
- Diploma and 7 to 9 years of Information Security or IT experience.

Posted 1 week ago

Apply

5.0 - 9.0 years

8 - 12 Lacs

Hyderabad

Work from Office

The External Data Analyst will be responsible for optimizing spend on, and reuse of, external data. This role maintains a data catalog with harmonized metadata across functions to increase visibility, promote reuse, and lower annual spend. The External Data Analyst will assess investments in external data and provide recommendations to the Enterprise Data Council to inform investment approval. This role will work with Global Strategic Sourcing and the Cyber Security Team to standardize contracting of data purchases, and will work closely with the data engineering team and external data providers to manage the lifecycle of data assets. The role is also responsible for co-defining and operationalizing the business process to capture metadata related to the forecast of data purchases, and for coordinating activities at the tactical level, interpreting Enterprise Data Council direction and defining operational-level deliverables and actions to maximize data investments.

Roles & Responsibilities:
- Catalog all external data assets, including harmonizing metadata to increase reuse and inform future data acquisitions.
- Co-develop and maintain the process to consistently capture the external data purchase forecast, focusing on generating the metadata required to support KPIs and reporting.
- Work with Global Strategic Sourcing and Cyber Security teams to standardize data contracts and enable reuse of data assets across functions.
- In partnership with functional data SMEs, develop internal expertise on the content of external data to increase reuse across teams. This includes, but is not limited to, participating in data seminars that bring together data SMEs from all functions to increase data literacy.
- In partnership with the Data Engineering team, design data standardization rules to make external data FAIR from the start, and maintain data quality.
- In partnership with the Data Privacy and Policy team, develop and operationalize data access controls that adhere to the terms of data contracts, ensuring access controls, compliance, and security requirements are enforced.
- Maintain policies and ensure compliance with data privacy, security, and contractual policies.
- Publish metrics to measure the effectiveness of data reuse and data literacy and the reduction in data spend.

Functional Skills:

Must-Have Skills:
- Experience managing external data assets used in the life-science industry (e.g., Claims, EHR).
- Experience working with data providers, supporting negotiations and vendor management activities.
- Technical data management skills with in-depth knowledge of pharma data standards and regulations; aware of industry trends and priorities and able to apply them to governance and policies.
- Experience with the data product development life cycle, including the enablement of data dictionaries and a business glossary to increase data product reusability and data literacy.

Good-to-Have Skills:
- Ability to successfully execute complex projects in a fast-paced environment and manage multiple priorities effectively.
- Ability to manage projects or departmental budgets.
- Experience with modeling tools (e.g., Visio).
- Basic programming skills; experience with data visualization and data modeling tools.
- Experience working with agile development methodologies such as Scaled Agile.

Soft Skills:
- Ability to build business relationships and understand end-to-end data use and needs.
- Excellent interpersonal skills (team player).
- People management skills, either in a matrix or direct-line function.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Good presentation and public speaking skills.
- Strong attention to detail, quality, time management, and customer focus.

Basic Qualifications:
- Any degree with 5-9 years of experience in Business, Engineering, IT, or a related field.

Posted 1 week ago

Apply

9.0 - 13.0 years

13 - 17 Lacs

Hyderabad

Work from Office

We are seeking an experienced BI Architect with expertise in Databricks, Spotfire (Tableau and Power BI secondary), AWS, and enterprise business intelligence (BI) solutions to design and implement scalable, high-performance BI architectures. This role focuses on data modeling, visualization, governance, self-service BI enablement, and cloud-based BI solutions, ensuring efficient, data-driven decision-making across the organization. The ideal candidate has strong expertise in BI strategy, data engineering, data warehousing, semantic layer modeling, dashboarding, and performance optimization, and works closely with data engineers, business stakeholders, and leadership to drive BI adoption and enterprise analytics excellence. The preferred candidate has extensive Spotfire experience, followed by Power BI or Tableau.

Roles & Responsibilities:
- Design and develop enterprise BI architectures and implement the architectural vision for TIBCO Spotfire at the enterprise level, hosted in AWS.
- Partner with data engineers and architects to ensure optimal data modeling, caching, and query performance in Spotfire.
- Design scalable, secure, and high-performance Spotfire environments, including multi-node server setups and hybrid cloud integrations.
- Develop reusable frameworks and templates for dashboards, data models, and automation processes.
- Optimize BI query performance, indexing, partitioning, caching, and report rendering to enhance dashboard responsiveness and data refresh speed.
- Implement real-time and batch data integration strategies, ensuring smooth data flow from APIs, ERP/CRM systems (SAP, Salesforce, Dynamics 365), cloud storage, and third-party data sources into BI solutions.
- Establish and enforce BI governance best practices, including data cataloging, metadata management, access control, data lineage tracking, and compliance standards.
- Troubleshoot interactive dashboards, paginated reports, and embedded analytics solutions that deliver actionable insights.
- Implement DataOps and CI/CD pipelines for BI, leveraging deployment pipelines, Git integration, and Infrastructure as Code (IaC) to enable version control and automation.
- Stay up to date with emerging BI technologies, cloud analytics trends, and AI/ML-powered BI solutions to drive innovation.
- Collaborate with business leaders, data analysts, and engineering teams to ensure BI adoption, self-service analytics enablement, and business-aligned KPIs.
- Provide mentorship and training to BI developers, analysts, and business teams, fostering a data-driven culture across the enterprise.

Must-Have Skills:
- Experience in BI architecture, data analytics, AWS, and enterprise BI solution development.
- Strong expertise in Spotfire, including information links, Spotfire Analyst, Spotfire Server, and Spotfire Web Player.
- Hands-on experience with Databricks (Apache Spark, Delta Lake, SQL, PySpark) for data processing, transformation, and analytics.
- Experience in scripting and extensions in Python or R.
- Expertise in BI strategy, KPI standardization, and enterprise data modeling, including dimensional modeling, star schemas, and data virtualization.
- Hands-on experience with cloud BI solutions and enterprise data warehouses such as Azure Synapse, AWS Redshift, Snowflake, Google BigQuery, or SQL Server Analysis Services (SSAS).
- Experience with BI governance, access control, metadata management, data lineage, and regulatory compliance frameworks.
- Expertise in Agile BI development, Scaled Agile (SAFe), DevOps for BI, and CI/CD practices for BI deployments.
- Ability to collaborate with C-level executives, business units, and engineering teams to drive BI adoption and data-driven decision-making.

Good-to-Have Skills:
- Experience with TIBCO Spotfire Lead Discovery.
- Knowledge of AI-powered BI, natural language processing (NLP) in BI, and automated machine learning (AutoML) for analytics.
- Experience with multi-cloud BI architectures and federated query solutions using Power BI or Tableau.
- Understanding of GraphQL, REST APIs, and data mesh principles for enterprise data access in BI.
- Knowledge of AI/ML pipeline integration within enterprise data architectures.

Education and Professional Certifications:
- Bachelor's degree with 9-13 years of experience in Computer Science, IT, or a related field.
- TIBCO Spotfire certifications.
- Power BI certifications.
- Tableau certifications.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly, be organized, and be detail oriented.
- Strong presentation and public speaking skills.
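
For context, a minimal sketch of two performance levers this posting highlights on the Databricks side: writing a Delta table partitioned for query pruning, and caching a hot aggregate for dashboard reads. It assumes a Databricks/Delta runtime; all table and column names are invented.

```python
# Partitioned Delta write plus a cached pre-aggregate for BI dashboards.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bi_perf").getOrCreate()

sales = spark.read.table("raw_sales")  # hypothetical source table

# Partition by a low-cardinality column BI filters on, so queries prune files.
(sales.write.format("delta")
      .mode("overwrite")
      .partitionBy("region")
      .saveAsTable("gold_sales"))

# Pre-aggregate and cache the slice dashboards hit most often.
daily = (spark.table("gold_sales")
              .groupBy("region", "order_date")
              .agg(F.sum("amount").alias("revenue")))
daily.cache()
daily.count()   # materialize the cache before dashboard queries arrive
```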

Posted 1 week ago

Apply

5.0 - 8.0 years

7 - 11 Lacs

Hyderabad

Work from Office

We are seeking an experienced MDM Data Analyst with 5-8 years of experience in MDM development, implementation, and operations of our Master Data Management (MDM) platforms, with hands-on experience in Informatica IDQ and Informatica MDM. This role involves hands-on implementation of MDM solutions using IDQ and Informatica MDM. To succeed in this role, the candidate must have strong IDQ and Informatica MDM technical experience.

Roles & Responsibilities:
- Develop and implement MDM solutions using the Informatica IDQ and Informatica MDM platforms.
- Define enterprise-wide MDM architecture, including IDQ, data stewardship, and metadata workflows.
- Define and implement the match/merge and survivorship strategy.
- Design and deliver MDM processes and data integrations using Unix, Python, and SQL.
- Collaborate with the backend data engineering team and the frontend custom UI team for strong integrations and a seamless, enhanced user experience.
- Coordinate with business and IT stakeholders to align MDM capabilities with organizational goals.
- Establish data quality metrics and monitor compliance using automated profiling and validation tools.
- Promote data governance and contribute to enterprise data modeling and approval workflows (DCRs).
- Ensure data integrity, lineage, and traceability across MDM pipelines and solutions.

Basic Qualifications and Experience:
- Master's degree with 4-6 years of experience in Business, Engineering, IT, or a related field; OR
- Bachelor's degree with 5-8 years of experience in Business, Engineering, IT, or a related field; OR
- Diploma with 10-12 years of experience in Business, Engineering, IT, or a related field.

Must-Have Skills:
- Deep knowledge of MDM tools (Informatica MDM) and data quality frameworks (IDQ), from configuring data assets to building end-to-end data pipelines and integrations for data mastering and orchestration of ETL pipelines.
- Very good understanding of reference data, hierarchies, and their integration with MDM.
- Hands-on experience with custom workflows (AVOS, Eclipse, etc.).
- Strong experience with external data enrichment services such as Address Doctor.
- Strong experience with match/merge and survivorship rule strategy and implementation.
- Strong experience with group fields, cross-reference data, and UUIDs.
- Strong understanding of AWS cloud services and Databricks architecture.
- Proficiency in Python, SQL, and Unix for data processing and orchestration.
- Experience with data modeling, governance, and DCR lifecycle management (AVOS).
- Proven leadership and project management in large-scale MDM implementations.
- Able to implement end-to-end integrations, including API-based, batch, and flat-file-based integrations.
- Must have worked on at least 3 end-to-end MDM implementations.
- Hands-on Unix and advanced SQL.

Good-to-Have Skills:
- Experience with Tableau or Power BI for reporting MDM insights.
- Exposure to Agile practices and tools (JIRA, Confluence).
- Prior experience in Pharma/Life Sciences.
- Understanding of compliance and regulatory considerations in master data.

Professional Certifications:
- Any MDM certification (e.g., Informatica).
- Any data analysis certification (SQL).
- Any cloud certification (AWS or Azure).

Soft Skills:
- Strong analytical abilities to assess and improve master data processes and solutions.
- Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders.
- Effective problem-solving skills to address data-related issues and implement scalable solutions.
- Ability to work effectively with global, virtual teams.

EQUAL OPPORTUNITY STATEMENT
We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Role GCF: 04A
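
For context, a minimal sketch of the match/merge and survivorship logic this posting centers on, illustrated outside any MDM tool: group candidate records by a match key, then pick surviving field values by source trust and recency. The data and trust ranking are invented.

```python
# Survivorship: build a golden record per match key from ranked sources.
import pandas as pd

records = pd.DataFrame([
    {"match_key": "ACME-PUNE", "name": "Acme Ltd",  "phone": None,
     "source": "CRM", "updated": "2024-05-01"},
    {"match_key": "ACME-PUNE", "name": "ACME Ltd.", "phone": "+91-20-555",
     "source": "ERP", "updated": "2024-06-15"},
])

TRUST = {"ERP": 1, "CRM": 2}  # lower rank means a more trusted source

def survive(group: pd.DataFrame) -> pd.Series:
    """Per field, take the first non-null value from the most trusted,
    most recently updated record."""
    ordered = (group.assign(rank=group["source"].map(TRUST))
                    .sort_values(["rank", "updated"], ascending=[True, False])
                    .drop(columns="rank"))
    return ordered.apply(
        lambda col: col.dropna().iloc[0] if col.notna().any() else None)

golden = records.groupby("match_key").apply(survive)
print(golden)
```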

Posted 1 week ago

Apply

6.0 - 9.0 years

4 - 8 Lacs

Hyderabad

Work from Office

We are looking for an experienced SAP Master Data Governance (MDG) techno-functional expert to co-design and drive the implementation of SAP Master Data Governance solutions. In this role, you will architect scalable, innovative systems and provide expert technical guidance, configuration, development, and maintenance aligned with Amgen's strategic objectives. You will collaborate closely with the MDG Product Owner, technical teams, other SAP S/4 functional and technical architects, and other functional MDG teams to implement, enhance, and optimize MDG master data replications and integrations, ensuring SAP MDG delivers maximum value across the organization.

Roles & Responsibilities:
- Collaborate with business stakeholders to understand data governance requirements and translate them into effective MDG solutions.
- Configure and implement SAP MDG solutions for MDG-Material, Business Partner, or Finance.
- Provide technical leadership and guidance to development teams, ensuring alignment with best practices and standards.
- Configure and customize SAP MDG on SAP S/4HANA in accordance with the MDG strategy.
- Develop and maintain data models, workflows, and business rules within the MDG framework.
- Collaborate with multi-functional teams to integrate MDG with other SAP modules and external systems.
- Ensure compliance with data governance policies and standards.
- Participate in project planning, estimation, and risk assessment.
- Mentor junior team members and contribute to knowledge sharing.
- Create comprehensive technical documentation, including design specifications, architecture diagrams, and user guides.
- Conduct training sessions for key partners and end-users as needed.
- Follow Agile software development methods to design, build, implement, and deploy.

Functional Skills:

Must-Have Skills:
- Experience in at least 2 SAP MDG implementations.
- Experience with at least 2 of the MDG data models, preferably including custom data models.
- Functional understanding of SAP master data and the MDG out-of-the-box solution.
- Technical expertise to build and develop workflows, validations, replication, etc.

Soft Skills:
- Strong analytical abilities to assess and improve master data processes and solutions.
- Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical collaborators.
- Effective problem-solving skills to address data-related issues and implement scalable solutions.
- Ability to work effectively with global, virtual teams.

Basic Qualifications:
- 6 to 9 years of Business, Engineering, IT, or related field experience.
- Expertise in the implementation of SAP MDG solutions (configuration, design, build, test, and deploy).
- Deep understanding of key SAP MDG concepts: Data Modeling, UI Modeling, Process Modeling, Governance Process, Mass Processing, DRF, DIF, BRF+, and Consolidation features plus DQM.
- Experience configuring rule-based workflows (serial, parallel, and combination) and user interface modeling.

Posted 1 week ago

Apply

3.0 - 4.0 years

40 - 45 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. We are looking for a highly motivated, expert Senior Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks. The ideal candidate will design, develop, and optimize data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines to support structured, semi-structured, and unstructured data processing across the Enterprise Data Fabric (a batch-pipeline sketch follows this posting).
- Implement real-time and batch data processing solutions, integrating data from multiple sources into a unified, governed data fabric architecture.
- Optimize big data processing frameworks using Apache Spark, Hadoop, or similar distributed computing technologies to ensure high availability and cost efficiency.
- Work with metadata management and data lineage tracking tools to enable enterprise-wide data discovery and governance.
- Ensure data security, compliance, and role-based access control (RBAC) across data environments.
- Optimize query performance, indexing strategies, partitioning, and caching for large-scale data sets.
- Develop CI/CD pipelines for automated data pipeline deployments, version control, and monitoring.
- Implement data virtualization techniques to provide seamless access to data across multiple storage systems.
- Collaborate with cross-functional teams, including data architects, business analysts, and DevOps teams, to align data engineering strategies with enterprise goals.
- Stay up to date with emerging data technologies and best practices, ensuring continuous improvement of Enterprise Data Fabric architectures.

Must-Have Skills:
- Hands-on experience with data engineering technologies such as Databricks, PySpark, Spark SQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies.
- Proficiency in workflow orchestration and performance tuning for big data processing.
- Strong understanding of AWS services.
- Experience with Data Fabric, Data Mesh, or similar enterprise-wide data architectures.
- Ability to quickly learn, adapt, and apply new technologies.
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork skills.
- Experience with the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.

Good-to-Have Skills:
- Deep expertise in the Biotech and Pharma industries.
- Experience writing APIs to make data available to consumers.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Education and Professional Certifications:
- Master's degree and 3 to 4+ years of Computer Science, IT, or related field experience OR
- Bachelor's degree and 5 to 8+ years of Computer Science, IT, or related field experience
- AWS Certified Data Engineer preferred
- Databricks certification preferred
- Scaled Agile SAFe certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly; organized and detail-oriented.
- Strong presentation and public speaking skills.
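As context for the ETL/ELT pipeline work described above, a minimal PySpark sketch of the batch pattern: ingest raw files, standardize, and publish to a partitioned Delta table. Paths, bucket names, and columns are hypothetical, and a Delta-enabled Spark runtime (e.g., Databricks) is assumed:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_batch_etl").getOrCreate()

    # Ingest raw CSV files (header row assumed), then standardize types and values.
    raw = spark.read.option("header", True).csv("s3://raw-bucket/orders/")

    cleaned = (
        raw.dropDuplicates(["order_id"])
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("order_date", F.to_date("order_ts"))
           .withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("amount") > 0)
    )

    # Publish to a partitioned Delta table for downstream consumers.
    (cleaned.write
            .format("delta")
            .mode("append")
            .partitionBy("order_date")
            .save("s3://curated-bucket/orders/"))

Partitioning on the date column is what makes the "partitioning and caching" tuning mentioned above pay off for date-bounded queries.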

Posted 1 week ago

Apply

9.0 - 12.0 years

40 - 45 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. We are looking for a highly motivated, expert Senior Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks. The ideal candidate will design, develop, and optimize data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines to support structured, semi-structured, and unstructured data processing across the Enterprise Data Fabric.
- Implement real-time and batch data processing solutions, integrating data from multiple sources into a unified, governed data fabric architecture (a streaming sketch follows this posting).
- Optimize big data processing frameworks using Apache Spark, Hadoop, or similar distributed computing technologies to ensure high availability and cost efficiency.
- Work with metadata management and data lineage tracking tools to enable enterprise-wide data discovery and governance.
- Ensure data security, compliance, and role-based access control (RBAC) across data environments.
- Optimize query performance, indexing strategies, partitioning, and caching for large-scale data sets.
- Develop CI/CD pipelines for automated data pipeline deployments, version control, and monitoring.
- Implement data virtualization techniques to provide seamless access to data across multiple storage systems.
- Collaborate with cross-functional teams, including data architects, business analysts, and DevOps teams, to align data engineering strategies with enterprise goals.
- Stay up to date with emerging data technologies and best practices, ensuring continuous improvement of Enterprise Data Fabric architectures.

Must-Have Skills:
- Hands-on experience with data engineering technologies such as Databricks, PySpark, Spark SQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies.
- Proficiency in workflow orchestration and performance tuning for big data processing.
- Strong understanding of AWS services.
- Experience with Data Fabric, Data Mesh, or similar enterprise-wide data architectures.
- Ability to quickly learn, adapt, and apply new technologies.
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork skills.
- Experience with the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.

Good-to-Have Skills:
- Deep expertise in the Biotech and Pharma industries.
- Experience writing APIs to make data available to consumers.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Education and Professional Certifications:
- 9 to 12 years of Computer Science, IT, or related field experience
- AWS Certified Data Engineer preferred
- Databricks certification preferred
- Scaled Agile SAFe certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly; organized and detail-oriented.
- Strong presentation and public speaking skills.
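For the real-time side of the same architecture, a minimal Structured Streaming sketch: read events from Kafka, parse them against a schema, and append continuously to a Delta table. The broker address, topic, paths, and schema are hypothetical; a Delta-enabled Spark runtime with the Kafka connector is assumed:

    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType

    spark = SparkSession.builder.appName("orders_stream").getOrCreate()

    schema = StructType([
        StructField("order_id", StringType()),
        StructField("customer_id", StringType()),
        StructField("amount", DoubleType()),
    ])

    # Read JSON events from Kafka and project them onto the schema.
    events = (
        spark.readStream.format("kafka")
             .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
             .option("subscribe", "orders")
             .load()
             .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
             .select("e.*")
    )

    # Continuously append to a Delta table; the checkpoint enables safe restarts.
    (events.writeStream
           .format("delta")
           .option("checkpointLocation", "s3://curated-bucket/_checkpoints/orders/")
           .start("s3://curated-bucket/orders_rt/"))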

Posted 1 week ago

Apply

5.0 - 10.0 years

2 - 6 Lacs

Hyderabad

Work from Office

As a Sr. Associate BI Engineer, you will support the development and delivery of data-driven solutions that enable business insights and operational efficiency. You will work closely with senior BI engineers, analysts, and stakeholders to build dashboards, analyze data, and contribute to the design of scalable reporting systems. This is an ideal role for early-career professionals looking to grow their technical and analytical skills in a collaborative environment.

Roles & Responsibilities:
- Design and maintain dashboards and reports using tools like Power BI, Tableau, or Cognos.
- Perform data analysis to identify trends and support business decisions (a small analysis sketch follows this posting).
- Gather BI requirements and translate them into technical specifications.
- Support data validation, testing, and documentation efforts.
- Apply best practices in data modeling, visualization, and BI development.
- Participate in Agile ceremonies and contribute to sprint planning and backlog grooming.

Basic Qualifications and Experience:
- Bachelor's or Master's degree in Computer Science, IT, or related field experience.
- At least 5 years of relevant experience.

Functional Skills:
- Exposure to data visualization tools such as Power BI, Tableau, or QuickSight.
- Proficiency in SQL and scripting languages (e.g., Python) for data processing and analysis.
- Familiarity with data modeling, warehousing, and ETL pipelines.
- Understanding of data structures and reporting concepts.
- Strong analytical and problem-solving skills.

Good-to-Have Skills:
- Familiarity with cloud services like AWS (e.g., Redshift, S3, EC2).
- Understanding of Agile methodologies (Scrum, SAFe).
- Knowledge of DevOps and CI/CD practices.
- Familiarity with scientific or healthcare data domains.

Soft Skills:
- Strong verbal and written communication skills.
- Willingness to learn and take initiative.
- Ability to work effectively in a team environment.
- Attention to detail and commitment to quality.
- Ability to manage time and prioritize tasks effectively.
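As a flavor of the trend-analysis work above, a small pandas sketch that aggregates revenue by month and derives a month-over-month growth KPI of the kind that would feed a Power BI or Tableau dataset. The data and column names are made up:

    import pandas as pd

    sales = pd.DataFrame({
        "date": pd.to_datetime(["2024-01-15", "2024-01-20", "2024-02-10", "2024-02-28"]),
        "region": ["EU", "US", "EU", "US"],
        "revenue": [120.0, 200.0, 150.0, 180.0],
    })

    # Aggregate to month x region, then compute the month-over-month growth KPI.
    monthly = (
        sales.set_index("date")
             .groupby([pd.Grouper(freq="MS"), "region"])["revenue"]
             .sum()
             .unstack("region")
    )
    monthly["total"] = monthly.sum(axis=1)
    monthly["mom_growth_pct"] = monthly["total"].pct_change() * 100
    print(monthly)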

Posted 1 week ago

Apply

3.0 - 5.0 years

14 - 16 Lacs

Hyderabad

Work from Office

As a BI Analyst in the Business Intelligence, Reporting, and Sensing team, you will play a critical role in transforming data into actionable insights that drive strategic decisions. You will collaborate with cross-functional teams to gather requirements, design analytical solutions, and deliver high-quality dashboards and reports. This role blends technical expertise with business acumen and requires strong communication and problem-solving skills.

Roles & Responsibilities:
- Collaborate with System Architects and Product Managers to manage business analysis activities, ensuring alignment with engineering and product goals.
- Support the design, development, and maintenance of interactive dashboards, reports, and data visualizations using BI tools (e.g., Power BI, Tableau, Cognos).
- Analyze datasets to identify trends, patterns, and insights that inform business strategy and decision-making.
- Collaborate with stakeholders across departments to understand data and reporting needs.
- Translate business requirements into technical specifications and analytical solutions (a small SQL sketch follows this posting).
- Work with Data Engineers to ensure data models and pipelines support accurate and reliable reporting.
- Contribute to data quality and governance initiatives.
- Document business processes, use cases, and test plans to support development and QA efforts.
- Participate in Agile ceremonies and contribute to backlog refinement and sprint planning.

Basic Qualifications and Experience:
- Bachelor's or Master's degree in Computer Science, IT, or a related field.
- At least 5 years of experience as a Business Analyst or in relevant areas.
- Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience OR
- Diploma and 7 to 9 years of Computer Science, IT, or related field experience

Functional Skills:
- Experience with data visualization tools such as Power BI, Tableau, or QuickSight.
- Proficiency in SQL and scripting languages (e.g., Python) for data processing and analysis.
- Familiarity with data modeling, warehousing, and ETL pipelines.
- Experience writing user stories and acceptance criteria in Agile tools like JIRA.
- Strong analytical and problem-solving skills.

Good-to-Have Skills:
- Experience with AWS services (e.g., Redshift, S3, EC2).
- Understanding of Agile methodologies (Scrum, SAFe).
- Knowledge of DevOps and CI/CD practices.
- Familiarity with scientific or healthcare data domains.

Professional Certifications:
- AWS Developer certification (preferred)
- SAFe for Teams certification (preferred)

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong presentation and public speaking skills.
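As an example of turning a reporting requirement into an analytical solution, a small window-function query that ranks products by revenue within each region. The sketch runs on Python's built-in sqlite3 module (SQLite 3.25+ for window functions); the table and column names are hypothetical:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE sales (region TEXT, product TEXT, revenue REAL);
        INSERT INTO sales VALUES
            ('EU', 'A', 120), ('EU', 'B', 90), ('US', 'A', 200), ('US', 'C', 60);
    """)

    # Rank products by revenue within each region for a "top products" report.
    query = """
        SELECT region, product, revenue,
               RANK() OVER (PARTITION BY region ORDER BY revenue DESC) AS revenue_rank
        FROM sales
        ORDER BY region, revenue_rank;
    """
    for row in con.execute(query):
        print(row)

The same query pattern carries over unchanged to warehouse engines such as Redshift behind a Power BI or Tableau dataset.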

Posted 1 week ago

Apply

15.0 - 16.0 years

22 - 30 Lacs

Hyderabad

Work from Office

We are seeking a highly skilled and experienced Principal AI Solution Architect to join our dynamic team. The candidate will lead AI solutioning and design across enterprise and cross-functional teams, working primarily with the MDM CoE to lead and drive AI solutions and optimizations and to provide thought leadership. The role involves developing and implementing AI strategies, collaborating with cross-functional teams, and ensuring the scalability, reliability, and performance of AI solutions. To succeed in this role, the candidate must have strong AI/ML, data science, and GenAI experience along with MDM knowledge. The candidate must have hands-on experience with technologies such as PySpark, PyTorch, TensorFlow, LLMs, Autogen, Hugging Face, vector databases, embeddings, and RAG, along with knowledge of Master Data Management (MDM).

Roles & Responsibilities:
- Lead the design, solutioning, and development of enterprise-level GenAI applications using LLM frameworks such as Langchain, Autogen, and Hugging Face.
- Architect intelligent pipelines using PySpark, TensorFlow, and PyTorch within Databricks and AWS environments.
- Implement embedding models and manage VectorStores for retrieval-augmented generation (RAG) solutions (a retrieval sketch follows this posting).
- Integrate and leverage MDM platforms like Informatica and Reltio to supply high-quality structured data to ML systems.
- Utilize SQL and Python for data engineering, data wrangling, and pipeline automation.
- Build scalable APIs and services to serve GenAI models in production.
- Lead cross-functional collaboration with data scientists, engineers, and product teams to scope, design, and deploy AI-powered systems.
- Ensure model governance, version control, and auditability aligned with regulatory and compliance expectations.

Basic Qualifications and Experience:
- Master's degree with 11-14 years of experience in Data Science, Artificial Intelligence, Computer Science, or related fields OR
- Bachelor's degree with 15-16 years of experience in Data Science, Artificial Intelligence, Computer Science, or related fields OR
- Diploma with 17-18 years of hands-on experience in Data Science, AI/ML technologies, or related technical domains

Functional Skills:
Must-Have Skills:
- 14+ years of experience in AI/ML or Data Science roles, including designing and implementing GenAI solutions.
- Extensive hands-on experience with LLM frameworks and tools such as Langchain, Autogen, Hugging Face, OpenAI APIs, and embedding models.
- Expertise in AI/ML solution architecture and design, with knowledge of industry best practices.
- Experience designing GenAI-based solutions on the Databricks platform.
- Hands-on experience with Python, PySpark, PyTorch, LLMs, vector databases, embeddings, scikit-learn, Langchain, TensorFlow, APIs, Autogen, VectorStores, MongoDB, Databricks, and Django.
- Strong knowledge of AWS and cloud-based AI infrastructure.
- Excellent problem-solving skills.
- Strong communication and leadership skills.
- Ability to collaborate effectively with cross-functional teams and stakeholders.
- Experience managing and mentoring junior team members, with the ability to provide them thought leadership.

Good-to-Have Skills:
- Prior experience in data modeling, ETL development, and data profiling to support AI/ML workflows.
- Working knowledge of Life Sciences or Pharma industry standards and regulatory considerations.
- Proficiency in tools like JIRA and Confluence for Agile delivery and project collaboration.
- Familiarity with MongoDB, VectorStores, and modern architecture principles for scalable GenAI applications.

Professional Certifications:
- Any data analysis certification (SQL, Python, other databases, or programming languages)
- Any cloud certification (AWS or Azure)
- Data Science and ML certifications

Soft Skills:
- Strong analytical abilities to assess and improve master data processes and solutions.
- Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders.
- Effective problem-solving skills to address data-related issues and implement scalable solutions.
- Ability to work effectively with global, virtual teams.
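To ground the RAG terminology above, a framework-agnostic sketch of the retrieval step: embed documents, embed the query, rank by cosine similarity, and assemble a grounded prompt. The embed() function here is a deliberate toy (a bag-of-words vector), standing in for a real embedding model; no specific library API (Langchain, Hugging Face, or otherwise) is assumed:

    import math
    from collections import Counter

    def embed(text: str) -> Counter:
        # Stand-in for a real embedding model: a bag-of-words vector.
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a)
        norm_a = math.sqrt(sum(v * v for v in a.values()))
        norm_b = math.sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    docs = [
        "Informatica MDM maintains golden records for customer master data.",
        "Databricks runs distributed PySpark jobs on AWS.",
        "Retrieval-augmented generation grounds llm answers in retrieved documents.",
    ]
    index = [(doc, embed(doc)) for doc in docs]  # the 'VectorStore' of this toy

    query = "How do we ground llm output in our own retrieved documents"
    query_vec = embed(query)
    best_doc, _ = max(index, key=lambda pair: cosine(query_vec, pair[1]))

    # The retrieved passage becomes the context of the generation prompt.
    prompt = "Answer using only this context:\n%s\n\nQuestion: %s" % (best_doc, query)
    print(prompt)

In production the Counter is replaced by dense embeddings in a vector database, and the prompt is sent to an LLM; the retrieve-then-ground control flow is the same.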

Posted 1 week ago

Apply

8.0 - 13.0 years

5 - 9 Lacs

Hyderabad

Work from Office

We are looking for an experienced SAP Master Data Governance (MDG) techno-functional consultant to co-design and drive the implementation of SAP Master Data Governance solutions. In this role, you will architect scalable, innovative systems and provide expert technical guidance, configuration, development, and maintenance aligned with Amgen's strategic objectives. You will collaborate closely with the MDG Product Owner, technical and SAP S/4 functional architects, and other functional MDG teams to implement, enhance, and optimize MDG master data replications and integrations, ensuring SAP MDG delivers maximum value across the organization.

Roles & Responsibilities:
- Collaborate with business stakeholders to understand data governance requirements and translate them into effective MDG solutions.
- Configure and implement SAP MDG solutions for MDG-Material, Business Partner, or Finance.
- Provide technical leadership and guidance to development teams, ensuring alignment with best practices and standards.
- Configure and customize SAP MDG on SAP S/4HANA in accordance with the MDG strategy.
- Develop and maintain data models, workflows, and business rules within the MDG framework.
- Collaborate with cross-functional teams to integrate MDG with other SAP modules and external systems.
- Ensure compliance with data governance policies and standards.
- Participate in project planning, estimation, and risk assessment.
- Mentor junior team members and contribute to knowledge sharing.
- Create comprehensive technical documentation, including design specifications, architecture diagrams, and user guides.
- Conduct training sessions for key partners and end users as needed.
- Follow Agile software development methods to design, build, implement, and deploy.

Functional Skills:
Must-Have Skills:
- Experience in at least 2 SAP MDG implementations.
- Experience with at least 2 of the MDG data models, preferably including custom data models.
- Functional understanding of SAP master data and the MDG out-of-the-box solution.
- Technical expertise to build and develop workflows, validations, replication, etc.

Soft Skills:
- Strong analytical abilities to assess and improve master data processes and solutions.
- Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical collaborators.
- Effective problem-solving skills to address data-related issues and implement scalable solutions.
- Ability to work effectively with global, virtual teams.

Basic Qualifications:
- 8 to 13 years of Business, Engineering, IT, or related field experience.
- Expertise in the implementation of SAP MDG solutions (configuration, design, build, test, and deploy).
- Deep understanding of key SAP MDG concepts: data modeling, UI modeling, process modeling, governance process, mass processing, DRF, DIF, BRF+, and consolidation features plus DQM.
- Experience configuring rule-based workflows (serial, parallel, and combination) and user interface modeling.

Posted 1 week ago

Apply

8.0 - 13.0 years

5 - 9 Lacs

Hyderabad

Work from Office

We are seeking an experienced MDM Engineer with 8-12 years of experience to lead the development and operations of our Master Data Management (MDM) platforms, with strong hands-on data engineering experience. The role owns the backend data engineering solution within the MDM team and is hands-on by design. To succeed in this role, the candidate must have strong data engineering experience with technologies such as SQL, Python, PySpark, Databricks, AWS, and API integrations.

Roles & Responsibilities:
- Develop distributed data pipelines using PySpark on Databricks for ingesting, transforming, and publishing master data.
- Write optimized SQL for large-scale data processing, including complex joins, window functions, and CTEs for MDM logic.
- Implement match/merge algorithms and survivorship rules using Informatica MDM or Reltio APIs (a match-step sketch follows this posting).
- Build and maintain Delta Lake tables with schema evolution and versioning for master data domains.
- Use AWS services like S3, Glue, Lambda, and Step Functions for orchestrating MDM workflows.
- Automate data quality checks using IDQ or custom PySpark validators with rule-based profiling.
- Integrate external enrichment sources (e.g., D&B, LexisNexis) via REST APIs and batch pipelines.
- Design and deploy CI/CD pipelines using GitHub Actions or Jenkins for Databricks notebooks and jobs.
- Monitor pipeline health using the Databricks Jobs API, CloudWatch, and custom logging frameworks.
- Implement fine-grained access control using Unity Catalog and attribute-based policies for MDM datasets.
- Use MLflow for tracking model-based entity resolution experiments where ML-based matching is applied.
- Collaborate with data stewards to expose curated MDM views via REST endpoints or Delta Sharing.

Basic Qualifications and Experience:
- 8 to 13 years of experience in Business, Engineering, IT, or a related field.

Functional Skills:
Must-Have Skills:
- Advanced proficiency in PySpark for distributed data processing and transformation.
- Strong SQL skills for complex data modeling, cleansing, and aggregation logic.
- Hands-on experience with Databricks, including Delta Lake, notebooks, and job orchestration.
- Deep understanding of MDM concepts, including match/merge, survivorship, and golden record creation.
- Experience with MDM platforms like Informatica MDM or Reltio, including REST API integration.
- Proficiency in AWS services such as S3, Glue, Lambda, Step Functions, and IAM.
- Familiarity with data quality frameworks and tools like Informatica IDQ or custom rule engines.
- Experience building CI/CD pipelines for data workflows using GitHub Actions, Jenkins, or similar.
- Knowledge of schema evolution, versioning, and metadata management in data lakes.
- Ability to implement lineage and observability using Unity Catalog or third-party tools.
- Comfort with Unix shell scripting or Python for orchestration and automation.
- Hands-on experience with RESTful APIs for ingesting external data sources and enrichment feeds.

Good-to-Have Skills:
- Experience with Tableau or Power BI for reporting MDM insights.
- Exposure to Agile practices and tools (JIRA, Confluence).
- Prior experience in Pharma/Life Sciences.
- Understanding of compliance and regulatory considerations in master data.

Professional Certifications:
- Any MDM certification (e.g., Informatica, Reltio)
- Any data analysis certification (SQL, Python, PySpark, Databricks)
- Any cloud certification (AWS or Azure)

Soft Skills:
- Strong analytical abilities to assess and improve master data processes and solutions.
- Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders.
- Effective problem-solving skills to address data-related issues and implement scalable solutions.
- Ability to work effectively with global, virtual teams.
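As an illustration of the match step named above, a minimal PySpark sketch: block candidate pairs on a cheap key, score them with edit distance, and keep likely duplicates for downstream merge and survivorship. All names and the threshold are hypothetical, and this is not the Informatica or Reltio matching engine:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("mdm_match_sketch").getOrCreate()

    parties = spark.createDataFrame(
        [("1", "Acme Corp", "NY"), ("2", "ACME Corporation", "NY"), ("3", "Zen Ltd", "CA")],
        ["id", "name", "state"],
    ).withColumn("block_key", F.upper(F.substring("name", 1, 3)))  # cheap blocking key

    a, b = parties.alias("a"), parties.alias("b")
    candidate_pairs = (
        a.join(b, (F.col("a.block_key") == F.col("b.block_key"))
                  & (F.col("a.id") < F.col("b.id")))  # compare each pair once
         .withColumn("edit_distance",
                     F.levenshtein(F.upper(F.col("a.name")), F.upper(F.col("b.name"))))
         .filter(F.col("edit_distance") <= 10)  # naive match threshold
         .select(F.col("a.id").alias("id_a"), F.col("b.id").alias("id_b"), "edit_distance")
    )
    candidate_pairs.show()

Blocking keeps the pairwise comparison from exploding quadratically; production engines layer weighted, multi-attribute scoring on top of the same block-then-score pattern.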

Posted 1 week ago

Apply