
8586 Data Modeling Jobs - Page 45

JobPe aggregates listings for convenient access, but applications are submitted directly on the original job portal.

1.0 - 3.0 years

14 - 16 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. The role involves working with large datasets, developing reports, supporting and executing data initiatives, and visualizing data to ensure it is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member assisting in the design and development of the data pipeline
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code and components
- Explore new tools and technologies that improve ETL platform performance
- Participate in sprint planning meetings and provide estimates for technical implementation

Basic Qualifications:
- Master's degree and 1 to 3 years of Computer Science, IT, or related field experience, OR
- Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience, OR
- Diploma and 7 to 9 years of Computer Science, IT, or related field experience

Must-Have Skills:
- Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, SparkSQL), including workflow orchestration and performance tuning of big data processing (a minimal PySpark sketch follows this listing)
- Proficiency in data analysis tools (e.g., SQL)
- Proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores
- Experience with ETL tools such as Apache Spark, and with Python packages for data processing and machine learning model development
- Strong understanding of data modeling, data warehousing, and data integration concepts
- Proven ability to optimize query performance on big data platforms

Preferred Qualifications:
- Experience with software engineering best practices, including version control, infrastructure-as-code, CI/CD, and automated testing
- Knowledge of Python/R, Databricks, and cloud data platforms
- Strong understanding of data governance frameworks, tools, and best practices
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)

Professional Certifications:
- AWS Certified Data Engineer preferred
- Databricks certification preferred

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated ability to function in a team setting
- Demonstrated presentation skills

What you can expect of us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans aligned with local industry standards.
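As a rough illustration of the PySpark ETL work this listing describes, here is a minimal sketch of an extract-transform-load step with a basic data-quality filter. The table names (raw.orders, curated.orders) and columns are hypothetical placeholders, not the employer's actual schema.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read the raw source table registered in the metastore.
raw = spark.table("raw.orders")  # hypothetical source table

# Transform: standardize types and drop rows that fail basic quality rules.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("order_id").isNotNull() & (F.col("amount") > 0))
)

# Load: write the curated table, partitioned by date for efficient querying.
(clean.write.mode("overwrite")
      .partitionBy("order_date")
      .saveAsTable("curated.orders"))
```

In a production pipeline a step like this would typically run under a workflow orchestrator, with rejected rows reported rather than silently dropped.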

Posted 1 week ago

Apply

6.0 - 9.0 years

15 - 16 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. The role involves working with large datasets, developing reports, supporting and executing data initiatives, and visualizing data to ensure it is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member assisting in the design and development of the data pipeline
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code and components
- Explore new tools and technologies that improve ETL platform performance
- Participate in sprint planning meetings and provide estimates for technical implementation

Basic Qualifications:
- Minimum of 6 to 9 years of relevant experience

Must-Have Skills:
- Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, SparkSQL), including workflow orchestration and performance tuning of big data processing
- Proficiency in data analysis tools (e.g., SQL)
- Proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores
- Experience with ETL tools such as Apache Spark, and with Python packages for data processing and machine learning model development
- Strong understanding of data modeling, data warehousing, and data integration concepts
- Proven ability to optimize query performance on big data platforms (see the tuning sketch after this listing)

Preferred Qualifications:
- Experience with software engineering best practices, including version control, infrastructure-as-code, CI/CD, and automated testing
- Knowledge of Python/R, Databricks, and cloud data platforms
- Strong understanding of data governance frameworks, tools, and best practices
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)

Professional Certifications:
- AWS Certified Data Engineer preferred
- Databricks certification preferred

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated ability to function in a team setting
- Demonstrated presentation skills

What you can expect of us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans aligned with local industry standards.
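Because this listing stresses query-performance tuning on big data platforms, here is a short sketch of two common Spark techniques: broadcasting a small dimension table to avoid shuffling the large fact table, and filtering on a partition column early so partitions are pruned at read time. All table and column names are invented for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("perf-tuning").getOrCreate()

facts = spark.table("curated.sales")    # large fact table (hypothetical)
stores = spark.table("curated.stores")  # small dimension table (hypothetical)

# Broadcast the small dimension so the join avoids a full shuffle of the facts.
joined = facts.join(F.broadcast(stores), on="store_id", how="left")

# Filter on the partition column early so partition pruning happens at read time.
recent = joined.filter(F.col("sale_date") >= "2024-01-01")

recent.explain()  # inspect the physical plan to confirm the broadcast and pruning
```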

Posted 1 week ago

Apply

1.0 - 3.0 years

5 - 9 Lacs

Hyderabad

Work from Office

As a Business Intelligence Engineer, you will solve unique and complex problems at a rapid pace, using the latest technologies to create highly scalable solutions. The role involves working closely with product managers, designers, and other engineers to create high-quality, scalable solutions and responding to requests for rapid releases of analytical outcomes.

Roles & Responsibilities:
- Design, develop, and maintain interactive dashboards, reports, and data visualizations using BI tools (e.g., Power BI, Tableau, Cognos)
- Analyze datasets to identify trends, patterns, and insights that inform business strategy and decision-making
- Partner with leaders and stakeholders across Finance, Sales, Customer Success, Marketing, Product, and other departments to understand their data and reporting requirements
- Stay abreast of the latest trends and technologies in business intelligence and data analytics, including the use of AI in BI
- Elicit and document clear, comprehensive business requirements for BI solutions, translating business needs into technical specifications and solutions
- Collaborate with Data Engineers to ensure efficient upstream transformations and create data models/views that feed accurate, reliable BI reporting
- Contribute to data quality and governance efforts to ensure the accuracy and consistency of BI data

What we expect of you

Basic Qualifications:
- Master's degree and 1 to 3 years of Computer Science, IT, or related field experience, OR
- Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience, OR
- Diploma and 7 to 9 years of Computer Science, IT, or related field experience

Functional Skills:
- 1+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience using SQL to pull data from a database or data warehouse, plus scripting experience (Python) to process data for modeling (a small sketch of this workflow follows this listing)

Preferred Qualifications:
- Experience with AWS services such as EC2, DynamoDB, S3, and Redshift
- Experience in data mining, ETL, etc., using databases in a business environment with large-scale, complex datasets
- AWS Developer certification (preferred)

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong presentation and public speaking skills

What you can expect of us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way.
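A minimal sketch of the "SQL to pull, Python to process" workflow named above, assuming a SQLAlchemy-compatible Redshift connection (the sqlalchemy-redshift dialect); the connection string, table, and column names are placeholders, not a real environment.

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connection; real credentials would come from a secrets store.
engine = create_engine("redshift+psycopg2://user:password@host:5439/analytics")

query = """
    SELECT region, order_date, SUM(amount) AS revenue
    FROM sales.orders
    WHERE order_date >= CURRENT_DATE - 90
    GROUP BY region, order_date
"""
df = pd.read_sql(query, engine)

# Python-side processing: 7-day rolling average revenue per region for trends.
df = df.sort_values(["region", "order_date"])
df["revenue_7d_avg"] = (
    df.groupby("region")["revenue"]
      .transform(lambda s: s.rolling(7, min_periods=1).mean())
)
```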

Posted 1 week ago

Apply

5.0 - 8.0 years

5 - 8 Lacs

Hyderabad

Work from Office

Let's do this. We are seeking an experienced Senior BI Engineer to lead the design, development, and optimization of scalable business intelligence (BI) solutions that empower data-driven decision-making across the organization. The ideal candidate is highly skilled in data modeling, dashboard development, ETL design, and cloud-based BI platforms, with a passion for turning complex data into clear, actionable insights. As a senior member of the BI team, you will work closely with data engineers, analysts, business stakeholders, and product teams to deliver robust, user-friendly analytics solutions that support strategic and operational goals.

Roles & Responsibilities:
- Design, develop, and maintain enterprise-grade BI dashboards and reports using tools like Power BI, Tableau, or Looker
- Build and optimize semantic models, tabular data structures, and reusable datasets for self-service BI users
- Partner with business stakeholders to translate requirements into technical solutions, delivering accurate, relevant, and timely insights
- Work closely with data engineering teams to integrate BI solutions into data lake, warehouse, or lakehouse architectures (e.g., Snowflake, Redshift, Databricks, BigQuery)
- Implement best practices for BI development, including version control, performance optimization, and data governance
- Ensure BI solutions are secure, scalable, and aligned with enterprise data governance standards
- Mentor junior BI developers and analysts, setting standards for dashboard usability, data visualization, and design consistency
- Collaborate with cross-functional teams to promote self-service BI adoption and data literacy throughout the organization
- Monitor BI performance, usage, and adoption, providing continuous improvements and training to enhance impact

Must-Have Skills:
- 5-8 years of experience in BI development and data visualization, with deep expertise in tools such as Power BI, Tableau, or Looker
- Strong knowledge of SQL, data modeling techniques, and BI architecture best practices
- Experience working with data warehouses and cloud data platforms
- Proficiency in building dashboards, KPIs, and executive-level reporting that align with business priorities
- Solid understanding of ETL/ELT processes, data pipelines, and integration with BI tools
- Strong collaboration skills, with the ability to work effectively across engineering, product, finance, and business teams
- Excellent communication skills, with a proven ability to translate technical concepts into business value

Good-to-Have Skills:
- Experience with cloud platforms (AWS, Azure, or GCP) and modern data stack environments
- Familiarity with data governance, data cataloging, metadata management, and access control
- Exposure to Agile methodologies, CI/CD for BI, and DevOps practices
- BI or data certifications (e.g., Microsoft Certified: Power BI Data Analyst, Tableau Certified Professional)

Education and Professional Certifications:
- Master's degree and 3 to 4+ years of Computer Science, IT, or related field experience, OR Bachelor's degree and 5 to 8+ years of Computer Science, IT, or related field experience
- Power BI / Tableau certifications preferred
- Scaled Agile (SAFe) certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Ability to learn quickly; organized and detail-oriented
- Strong presentation and public speaking skills

Posted 1 week ago

Apply

1.0 - 3.0 years

1 - 5 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. In this vital role you will support the stability, performance, and alignment of Quality-focused data & analytics applications. The role requires close collaboration with business stakeholders, active monitoring and troubleshooting, and support for enhancement and testing cycles across critical applications. You will also be responsible for developing and maintaining software solutions that meet business needs and ensuring the availability and performance of critical systems and applications.

Roles & Responsibilities:
- Maintain existing code and configuration: support and maintain data & analytics products
- Development & Deployment: develop, test, and deploy code based on designs created with the guidance of senior team members; implement solutions following best practices for code structure and efficiency
- Documentation: generate clear and concise code documentation for new and existing features to ensure smooth handovers and easy future reference
- Collaborative Design: work closely with team members and stakeholders to understand project requirements and translate them into functional technical designs
- Code Reviews & Quality Assurance: participate in peer code reviews, provide feedback on adherence to best practices, and ensure high code quality and maintainability
- Testing & Debugging: assist in writing unit and integration tests to validate new features; support troubleshooting and debugging efforts for existing systems to resolve bugs and performance issues
- Perform application support and administration tasks such as periodic reviews, incident response and resolution, and security reviews (a small monitoring sketch follows this listing)
- Continuous Learning: stay up to date with new technologies and best practices, with a focus on expanding knowledge in cloud services, automation, and secure software development

Must-Have Skills:
- Solid technical background, including an understanding of software development processes, databases, and cloud-based systems
- Ability to triage and resolve incidents, escalate when necessary, and maintain SLAs
- Strong foundational knowledge of testing methodologies
- Experience working with databases, data modeling, and data warehousing (Oracle, MySQL)
- Ability to understand and map business requirements to system capabilities
- Comfortable engaging with global stakeholders, communicating both technical and non-technical issues

Good-to-Have Skills:
- Understanding of Quality Control and Quality Assurance processes within the biopharmaceutical industry
- Curiosity about the modern technology domain and learning agility
- Experience with the following technologies: AWS services (DynamoDB, EC2, S3, etc.), API integration, SQL, testing tools (e.g., Application Lifecycle Management), and ITIL platforms (e.g., ServiceNow)

Professional Certifications:
- AWS Cloud Practitioner (preferred)

Soft Skills:
- Excellent analytical and problem-solving skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation

What we expect of you: We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree and 1 to 3 years of Computer Science, IT, or related field experience, OR
- Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience, OR
- Diploma and 7 to 9 years of Computer Science, IT, or related field experience

What you can expect of us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans aligned with local industry standards.
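As an illustration of the periodic-review and incident-triage work above, a small sketch of a data-quality check that logs an alert when a threshold is breached. sqlite3 stands in for the Oracle/MySQL stores named in the listing; the table and column names are hypothetical.

```python
import logging
import sqlite3

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("qa-monitor")

conn = sqlite3.connect("quality.db")  # placeholder for a real database connection
row_count, missing_batch = conn.execute(
    "SELECT COUNT(*), SUM(CASE WHEN batch_id IS NULL THEN 1 ELSE 0 END) "
    "FROM qc_results"
).fetchone()
conn.close()

if not row_count:
    log.error("qc_results is empty -- raise an incident per the SLA runbook")
elif missing_batch:
    log.warning("%d rows missing batch_id -- flag for triage", missing_batch)
else:
    log.info("qc_results passed checks (%d rows)", row_count)
```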

Posted 1 week ago

Apply

2.0 - 6.0 years

1 - 4 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. In this vital role you will support the stability, performance, and alignment of Quality-focused data & analytics applications. The role requires close collaboration with business stakeholders, active monitoring and troubleshooting, and support for enhancement and testing cycles across critical applications. You will also be responsible for developing and maintaining software solutions that meet business needs and ensuring the availability and performance of critical systems and applications.

Roles & Responsibilities:
- Maintain existing code and configuration: support and maintain data & analytics products
- Development & Deployment: develop, test, and deploy code based on designs created with the guidance of senior team members; implement solutions following best practices for code structure and efficiency
- Documentation: generate clear and concise code documentation for new and existing features to ensure smooth handovers and easy future reference
- Collaborative Design: work closely with team members and stakeholders to understand project requirements and translate them into functional technical designs
- Code Reviews & Quality Assurance: participate in peer code reviews, provide feedback on adherence to best practices, and ensure high code quality and maintainability
- Testing & Debugging: assist in writing unit and integration tests to validate new features; support troubleshooting and debugging efforts for existing systems to resolve bugs and performance issues
- Perform application support and administration tasks such as periodic reviews, incident response and resolution, and security reviews
- Continuous Learning: stay up to date with new technologies and best practices, with a focus on expanding knowledge in cloud services, automation, and secure software development

Must-Have Skills:
- Solid technical background, including an understanding of software development processes, databases, and cloud-based systems
- Ability to triage and resolve incidents, escalate when necessary, and maintain SLAs
- Strong foundational knowledge of testing methodologies
- Experience working with databases, data modeling, and data warehousing (Oracle, MySQL)
- Ability to understand and map business requirements to system capabilities
- Comfortable engaging with global stakeholders, communicating both technical and non-technical issues

Good-to-Have Skills:
- Understanding of Quality Control and Quality Assurance processes within the biopharmaceutical industry
- Curiosity about the modern technology domain and learning agility
- Experience with the following technologies: AWS services (DynamoDB, EC2, S3, etc.), API integration, SQL, testing tools (e.g., Application Lifecycle Management), and ITIL platforms (e.g., ServiceNow)

Professional Certifications:
- AWS Cloud Practitioner (preferred)

Soft Skills:
- Excellent analytical and problem-solving skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation

What we expect of you: We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Bachelor's degree and 2 to 6 years of relevant experience

What you can expect of us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans aligned with local industry standards.

Posted 1 week ago

Apply

0.0 - 4.0 years

2 - 4 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. In this vital role you will support the stability, performance, and alignment of Quality-focused data & analytics applications. The role requires close collaboration with business stakeholders, active monitoring and troubleshooting, and support for enhancement and testing cycles across critical applications. You will also be responsible for developing and maintaining software solutions that meet business needs and ensuring the availability and performance of critical systems and applications.

Roles & Responsibilities:
- Maintain existing code and configuration: support and maintain data & analytics products
- Development & Deployment: develop, test, and deploy code based on designs created with the guidance of senior team members; implement solutions following best practices for code structure and efficiency
- Documentation: generate clear and concise code documentation for new and existing features to ensure smooth handovers and easy future reference
- Collaborative Design: work closely with team members and stakeholders to understand project requirements and translate them into functional technical designs
- Code Reviews & Quality Assurance: participate in peer code reviews, provide feedback on adherence to best practices, and ensure high code quality and maintainability
- Testing & Debugging: assist in writing unit and integration tests to validate new features; support troubleshooting and debugging efforts for existing systems to resolve bugs and performance issues
- Perform application support and administration tasks such as periodic reviews, incident response and resolution, and security reviews
- Continuous Learning: stay up to date with new technologies and best practices, with a focus on expanding knowledge in cloud services, automation, and secure software development

What we expect of you: We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Bachelor's or Master's degree with 0 to 4 years of experience in software engineering

Preferred Qualifications:
- Solid technical background, including an understanding of software development processes, databases, and cloud-based systems
- Ability to triage and resolve incidents, escalate when necessary, and maintain SLAs
- Strong foundational knowledge of testing methodologies
- Experience working with databases, data modeling, and data warehousing (Oracle, MySQL)
- Ability to understand and map business requirements to system capabilities
- Comfortable engaging with global stakeholders, communicating both technical and non-technical issues

Good-to-Have Skills:
- Understanding of Quality Control and Quality Assurance processes within the biopharmaceutical industry
- Curiosity about the modern technology domain and learning agility
- Experience with the following technologies: AWS services (DynamoDB, EC2, S3, etc.), API integration, SQL, testing tools (e.g., Application Lifecycle Management), and ITIL platforms (e.g., ServiceNow)

Professional Certifications:
- AWS Cloud Practitioner (preferred)

Soft Skills:
- Excellent analytical and problem-solving skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation

What you can expect of us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans aligned with local industry standards.

Posted 1 week ago

Apply

0.0 - 3.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. In this vital role you will design, build, and maintain data lake solutions for scientific data that drive business decisions for Research. You will build scalable, high-performance data engineering solutions for large scientific datasets and collaborate with Research stakeholders. The ideal candidate has experience in the pharmaceutical or biotech industry, strong technical skills, experience with big data technologies, and an understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and implement data pipelines, ETL/ELT processes, and data integration solutions
- Contribute to data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Contribute to data models for biopharma scientific data, data dictionaries, and other documentation to ensure data accuracy and consistency
- Optimize large datasets for query performance
- Collaborate with global cross-functional teams, including research scientists, to understand data requirements and design solutions that meet business needs
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate with Data Architects, Business SMEs, Software Engineers, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code and components
- Explore new tools and technologies that improve ETL platform performance
- Participate in sprint planning meetings and provide estimates for technical implementation
- Maintain documentation of processes, systems, and solutions

What we expect of you: We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Bachelor's degree and 0 to 3 years of Computer Science, IT, or related field experience, OR
- Diploma and 4 to 7 years of Computer Science, IT, or related field experience

Preferred Qualifications:
- 1+ years of experience in implementing and supporting biopharma scientific research data analytics (software platforms)

Must-Have Skills:
- Proficiency in SQL and Python for data engineering, test automation frameworks (pytest), and scripting tasks (a short pytest sketch follows this listing)
- Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, SparkSQL), including workflow orchestration and performance tuning of big data processing
- Excellent problem-solving skills and the ability to work with large, complex datasets

Good-to-Have Skills:
- A passion for tackling complex challenges in drug discovery with technology and data
- Strong understanding of data modeling, data warehousing, and data integration concepts
- Strong experience using RDBMSs (e.g., Oracle, MySQL, SQL Server, PostgreSQL)
- Knowledge of cloud data platforms (AWS preferred)
- Experience with data visualization tools (e.g., Dash, Plotly, Spotfire)
- Experience with diagramming and collaboration tools such as Miro or Lucidchart for process mapping and brainstorming
- Experience writing and maintaining technical documentation in Confluence

Professional Certifications:
- Databricks Certified Data Engineer Professional preferred

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated ability to function in a team setting
- Demonstrated presentation skills

What you can expect of us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans aligned with local industry standards.
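A minimal pytest sketch of the test-automation skill named above: unit-testing a pure transformation function before it is wired into a pipeline. The dedupe_latest helper and its columns are invented for illustration.

```python
import pandas as pd


def dedupe_latest(df: pd.DataFrame) -> pd.DataFrame:
    """Keep only the most recent record per sample_id."""
    return (df.sort_values("updated_at")
              .drop_duplicates("sample_id", keep="last")
              .reset_index(drop=True))


def test_dedupe_latest_keeps_newest_record():
    df = pd.DataFrame({
        "sample_id": ["s1", "s1", "s2"],
        "updated_at": ["2024-01-01", "2024-02-01", "2024-01-15"],
        "assay": ["old", "new", "only"],
    })
    out = dedupe_latest(df)
    assert len(out) == 2
    assert out.loc[out["sample_id"] == "s1", "assay"].item() == "new"
```

Run with `pytest` from the project root; in practice such tests would sit in a CI pipeline next to the pipeline code they cover.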

Posted 1 week ago

Apply

2.0 - 6.0 years

1 - 4 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. In this vital role you will design, build, and maintain data lake solutions for scientific data that drive business decisions for Research. You will build scalable, high-performance data engineering solutions for large scientific datasets and collaborate with Research stakeholders. The ideal candidate has experience in the pharmaceutical or biotech industry, strong technical skills, experience with big data technologies, and an understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and implement data pipelines, ETL/ELT processes, and data integration solutions
- Contribute to data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Contribute to data models for biopharma scientific data, data dictionaries, and other documentation to ensure data accuracy and consistency
- Optimize large datasets for query performance
- Collaborate with global cross-functional teams, including research scientists, to understand data requirements and design solutions that meet business needs
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate with Data Architects, Business SMEs, Software Engineers, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code and components
- Explore new tools and technologies that improve ETL platform performance
- Participate in sprint planning meetings and provide estimates for technical implementation
- Maintain documentation of processes, systems, and solutions

What we expect of you: We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Bachelor's degree with 2 to 6 years of Computer Science, IT, or related field experience

Preferred Qualifications:
- 1+ years of experience in designing and supporting biopharma scientific research data analytics (software platforms)

Must-Have Skills:
- Proficiency in SQL and Python for data engineering, test automation frameworks (pytest), and scripting tasks
- Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, SparkSQL), including workflow orchestration and performance tuning of big data processing
- Excellent problem-solving skills and the ability to work with large, complex datasets

Good-to-Have Skills:
- A passion for tackling complex challenges in drug discovery with technology and data
- Solid understanding of data modeling, data warehousing, and data integration concepts
- Solid experience using RDBMSs (e.g., Oracle, MySQL, SQL Server, PostgreSQL)
- Knowledge of cloud data platforms (AWS preferred)
- Experience with data visualization tools (e.g., Dash, Plotly, Spotfire; an illustrative Plotly snippet follows this listing)
- Experience with diagramming and collaboration tools such as Miro or Lucidchart for process mapping and brainstorming
- Experience writing and maintaining technical documentation in Confluence

Professional Certifications:
- Databricks Certified Data Engineer Professional preferred

Soft Skills:
- Strong learning agility; ability to pick up new technologies used to support early drug discovery data analysis needs
- Collaborative, with good communication skills
- High degree of initiative and self-motivation
- Ability to handle multiple priorities successfully
- Team-oriented, with a focus on achieving team goals

What you can expect of us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans aligned with local industry standards.
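A short sketch of the Plotly-style visualization work listed as good-to-have, with invented dose-response data standing in for real research datasets (the compound ID is a placeholder).

```python
import pandas as pd
import plotly.express as px

df = pd.DataFrame({
    "concentration": [0.1, 1, 10, 100],   # hypothetical assay concentrations
    "response": [5, 22, 61, 88],          # hypothetical percent response
    "compound": ["CMPD-001"] * 4,         # placeholder compound identifier
})

# Log-scaled x-axis is the conventional view for dose-response curves.
fig = px.line(df, x="concentration", y="response", color="compound",
              log_x=True, markers=True,
              title="Dose-response curve (illustrative data)")
fig.show()
```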

Posted 1 week ago

Apply

1.0 - 3.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. In this vital role you will be part of Research's Semantic Graph team, which is seeking a dedicated and skilled Data Engineer to design, build, and maintain solutions for scientific data that drive business decisions for Research. You will build scalable, high-performance, graph-based data engineering solutions for large scientific datasets and collaborate with Research partners. The ideal candidate has experience in the pharmaceutical or biotech industry, deep technical skills, experience with semantic data modeling and graph databases, and an understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and implement data pipelines, ETL/ELT processes, and data integration solutions
- Contribute to data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Contribute to data models for biopharma scientific data, data dictionaries, and other documentation to ensure data accuracy and consistency
- Optimize large datasets for query performance
- Collaborate with global cross-functional teams, including research scientists, to understand data requirements and design solutions that meet business needs
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate with Data Architects, Business SMEs, Software Engineers, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code and components
- Explore new tools and technologies that improve ETL platform performance
- Participate in sprint planning meetings and provide estimates for technical implementation
- Maintain documentation of processes, systems, and solutions

What we expect of you: We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications and Experience:
- Bachelor's degree and 1 to 3 years of Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or related field experience, OR
- Diploma and 4 to 7 years of Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or related field experience

Must-Have Skills:
- Advanced semantic and relational data skills: proficiency in Python, RDF, SPARQL, graph databases (e.g., AllegroGraph), SQL, relational databases, ETL pipelines, big data technologies (e.g., Databricks), semantic data standards (OWL, W3C, FAIR principles), ontology development, and semantic modeling practices (a minimal RDF/SPARQL sketch follows this listing)
- Hands-on experience with big data technologies and platforms such as Databricks, including workflow orchestration and performance tuning of data processing
- Excellent problem-solving skills and the ability to work with large, complex datasets

Good-to-Have Skills:
- A passion for tackling complex challenges in drug discovery with technology and data
- System administration skills, such as managing Linux and Windows servers, configuring network infrastructure, and automating tasks with shell scripting; examples include setting up and maintaining virtual machines, troubleshooting server issues, and ensuring data security through regular updates and backups
- Solid understanding of data modeling, data warehousing, and data integration concepts
- Solid experience using RDBMSs (e.g., Oracle, MySQL, SQL Server, PostgreSQL)
- Knowledge of cloud data platforms (AWS preferred)
- Experience with data visualization tools (e.g., Dash, Plotly, Spotfire)
- Experience with diagramming and collaboration tools such as Miro or Lucidchart for process mapping and brainstorming
- Experience writing and maintaining user documentation in Confluence

Professional Certifications:
- Databricks Certified Data Engineer Professional preferred

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Good communication and collaboration skills
- Demonstrated ability to function in a team setting
- Demonstrated presentation skills

What you can expect of us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans aligned with local industry standards.
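A minimal rdflib sketch of the RDF and SPARQL skills named above; the namespace and triples are invented for illustration (a triplestore such as AllegroGraph would be queried with the same SPARQL over its endpoint).

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/research#")  # hypothetical ontology namespace
g = Graph()

# Assert a tiny gene-to-protein subgraph.
g.add((EX.gene_tp53, RDF.type, EX.Gene))
g.add((EX.gene_tp53, EX.encodes, EX.protein_p53))
g.add((EX.protein_p53, EX.label, Literal("Cellular tumor antigen p53")))

# SPARQL: find every protein encoded by a gene, with its label.
results = g.query("""
    PREFIX ex: <http://example.org/research#>
    SELECT ?protein ?label WHERE {
        ?gene a ex:Gene ; ex:encodes ?protein .
        ?protein ex:label ?label .
    }
""")
for protein, label in results:
    print(protein, label)
```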

Posted 1 week ago

Apply

0.0 - 3.0 years

13 - 15 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. In this vital role you are responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. The role involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data to ensure it is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member assisting in the design and development of the data pipeline
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code and components
- Explore new tools and technologies that improve ETL platform performance
- Participate in sprint planning meetings and provide estimates for technical implementation

What we expect of you: We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Bachelor's degree and 0 to 3 years of Computer Science, IT, or related field experience, OR
- Diploma and 4 to 7 years of Computer Science, IT, or related field experience

Must-Have Skills:
- Hands-on experience with big data technologies and platforms such as Databricks, Apache Spark (PySpark, SparkSQL), AWS, Redshift, and Snowflake, including workflow orchestration and performance tuning of big data processing
- Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools
- Proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores
- Experience with ETL tools such as Apache Spark, and with Python packages for data processing and machine learning model development

Good-to-Have Skills:
- Experience with data modeling and performance tuning on relational and graph databases (e.g., MarkLogic, AllegroGraph, Stardog, RDF triplestores)
- Understanding of data modeling, data warehousing, and data integration concepts
- Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms
- Experience with software engineering best practices, including version control, infrastructure-as-code, CI/CD, and automated testing

Professional Certifications:
- AWS Certified Data Engineer preferred
- Databricks certification preferred

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated ability to function in a team setting

Shift Information: This position requires working a later shift and may be assigned a second- or third-shift schedule. Candidates must be willing and able to work evening or night shifts, as required by business needs.

What you can expect of us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans aligned with local industry standards.

Posted 1 week ago

Apply

2.0 - 5.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Business Advisors shape the vision and strategy with the client, understand the needs of the users/stakeholders, carry out an elicitation of processes, data, and capabilities, and derive the target processes and the business requirements for the current and future solution.

Job Description - Grade Specific: Defines the methods and the business analysis framework for the business analysis work to be carried out in their project/program together with the client. Additionally performs requirements elicitation and modelling. Performs leadership activities within the project and beyond.

Skills (competencies): Abstract Thinking, Active Listening, Agile (Software Development Framework), Analytical Thinking, Backlog Grooming, Business Architecture Modeling, Business Process Modeling (e.g., BPMN), Change Management, Coaching, Collaboration, Commercial Acumen, Conceptual Data Modeling, Conflict Management, Confluence, Critical Thinking, CxO Conversations, Data Analysis, Data Requirements Management, Decision-Making, Emotional Intelligence, Enterprise Architecture Modelling, Facilitation, Functional IT Architecture Modelling, Giving Feedback, Google Cloud Platform (GCP), Influencing, Innovation, Jira, Mediation, Mentoring, Microsoft Office, Motivation, Negotiation, Networking, Power BI, Presentation Skills, Prioritization, Problem Solving, Project Governance, Project Management, Project Planning, Qlik, Relationship-Building, Requirements Gathering, Risk Management, Scope Management, SQL, Stakeholder Management, Story Mapping, Storytelling, Strategic Management, Strategic Thinking, SWOT Analysis, Systems Requirement Analysis (or Management), Tableau, Trusted Advisor, UI Design / Wireframing, UML, User Journey, User Research, Verbal Communication, Written Communication

Posted 1 week ago

Apply

6.0 - 10.0 years

20 - 25 Lacs

Hyderabad

Work from Office

Summary: Responsible for the detailed design, development, and delivery of system solutions such as Reporting, Analytical, and Gen AI within a specific business or technology area. This role requires alignment with the defined solution architecture, leveraging existing patterns, and ensuring compliance with both business and technical requirements.

About the Role

Role Title: Assoc. Dir. DDIT DEV Data Analytics DS&AI
Location: Hyderabad, India (hybrid)

Role Purpose: Create the detailed DDIT solution/service design, based on functional specifications, to meet quality and performance requirements and technical constraints. Responsible for detailed design, development, code review, and delivery of Analytical and Gen AI solutions.

Your responsibilities include, but are not limited to:
- Detailed design, development, and delivery of system solutions within a specific business or technology area, aligned with the defined solution architecture, leveraging existing patterns, and ensuring compliance with both business and technical requirements
- Develop solution architectures that align with enterprise standards and meet functional and non-functional requirements; solutions may span Reporting, Web applications, and Gen AI
- Leverage reference architectures, patterns, and standards to ensure consistency and scalability
- Take accountability for technical delivery of projects/use cases for a specific business/technology area and ensure adherence to Security and Compliance policies and procedures within the Service Delivery scope
- Collaborate with and lead diverse groups of colleagues (data engineering, data science, platform teams, and business stakeholders) and positively manage ambiguity
- Apply best practices in design and continuously improve the user experience for business stakeholders; ensure the overall user experience is considered when designing new solutions and services
- Work as an individual contributor or lead teams, engaging multiple stakeholders (architecture/infrastructure/vendor partners) on projects of medium to large complexity

What you'll bring to the role:
- A background in programming and solution design; exposure to a wide range of technologies across the Reporting, Web application, and Gen AI domains is preferred
- Experience in project management and solution/service delivery
- Strong analytical and conceptual skills for designing and implementing IT solutions that meet business needs
- Hands-on experience with cloud platforms such as AWS (Amazon Web Services), Amazon S3, Amazon RDS, Databricks, and AWS Glue
- Extensive hands-on experience with Power BI or Qlik Sense/Spotfire
- Experience in web technologies and React JS
- Working experience and knowledge of ETL tools (Databricks, Spark, Kafka, Dataiku, etc.) and data modeling
- Experience with database technologies (Oracle, Snowflake, etc.) and data processing languages (SQL, Python, R, etc.)
- Proficiency in generative AI, large language models (LLMs), multimodal AI, and deep learning for pharma applications
- Excellent communication and stakeholder management skills
- Experience in highly regulated environments, ideally in the pharma industry, including Computer System Validation with good documentation practice (GxP)
- Experience working in Agile Scrum teams
- Exposure to fine-tuning LLMs is a big plus

Desirable Requirements: Education & Qualifications
- Bachelor's degree in Computer Science, Computer Engineering, or a related technical discipline, or equivalent demonstrated experience
- 12+ years of experience with an expert understanding and proven track record of analyzing business processes and architecting, designing, developing, and integrating complex, cross-divisional, end-to-end analytical solutions with large data volumes
- Skill priorities: ETL tools such as Databricks, SQL, database technologies, and AWS technologies (primary, strong); Power BI technologies (primary, good); web technologies and React JS (primary, strong); generative AI, LLMs, multimodal AI, and deep learning (secondary, good to have); architecture for Reporting, Analytical, and Web applications (primary); Python or R (secondary, good to have); GxP compliance (good to have); pharma domain knowledge (good to have)

Commitment to Diversity & Inclusion: Novartis embraces diversity, equal opportunity, and inclusion. We are committed to building diverse teams, representative of the patients and communities we serve, and we strive to create an inclusive workplace that cultivates bold innovation through collaboration and empowers our people to unleash their full potential.

Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting, and inspiring each other. Combining to achieve breakthroughs that change patients' lives. Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture

Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network

Benefits and Rewards: Read our handbook to learn about all the ways we'll help you thrive personally and professionally.

Posted 1 week ago

Apply

4.0 - 8.0 years

10 - 14 Lacs

Gurugram

Work from Office

Overview: We are seeking a self-driven Senior Tableau Engineer with deep expertise in data modeling, visualization design, and BI-tool migrations. You'll own end-to-end dashboard development, translate complex healthcare and enterprise data into actionable insights, and lead migrations from legacy BI platforms (e.g., MicroStrategy, BusinessObjects) to Tableau.

Job Location: Delhi NCR / Bangalore / Pune

Key Responsibilities

Data Modeling & Architecture:
- Design and maintain logical and physical data models optimized for Tableau performance
- Collaborate with data engineers to define star/snowflake schemas, data marts, and semantic layers
- Ensure data integrity, governance, and lineage across multiple source systems

Visualization Development:
- Develop high-impact, interactive Tableau dashboards and visualizations for executive-level stakeholders
- Apply design best practices: color theory, UX principles, and accessibility standards
- Optimize workbooks for performance (efficient calculations, extracts, and queries)

BI Migration & Modernization:
- Lead migration projects from MicroStrategy, BusinessObjects, or other BI tools to Tableau
- Reproduce and enhance legacy reports in Tableau, ensuring feature parity and improved UX
- Validate data accuracy post-migration through sampling, reconciliation, and automated testing

Automation & Deployment:
- Automate data extract refreshes, alerting, and workbook publishing via Tableau Server/Online (a publishing sketch follows this listing)
- Implement CI/CD processes for Tableau content using Git, Tableau APIs, and automated testing frameworks
- Establish standardized naming conventions, folder structures, and content lifecycle policies

Collaboration & Mentorship:
- Partner with analytics translators, data engineers, and business owners to gather requirements and iterate on solutions
- Mentor junior BI developers on Tableau best practices, performance tuning, and dashboard design
- Evangelize self-service BI adoption: train users, develop documentation, and host office hours

Governance & Quality:
- Define and enforce Tableau governance: security, permissions, version control, and change management
- Implement data quality checks and monitoring for dashboards (row counts, anomalies, thresholds)
- Track and report key metrics on dashboard usage, performance, and user satisfaction
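A hedged sketch of automating workbook publishing through Tableau Server's API using the tableauserverclient library, the kind of step a CI/CD pipeline for Tableau content would run; the server URL, site, project name, token values, and workbook file are placeholders.

```python
import tableauserverclient as TSC

# Personal access token auth; in CI the token value would come from a secret.
auth = TSC.PersonalAccessTokenAuth("ci-token", "TOKEN_VALUE", site_id="analytics")
server = TSC.Server("https://tableau.example.com", use_server_version=True)

with server.auth.sign_in(auth):
    # Find the target project, then publish (overwrite) the migrated workbook.
    all_projects, _ = server.projects.get()
    project = next(p for p in all_projects if p.name == "Migrated Reports")

    item = TSC.WorkbookItem(project_id=project.id)
    server.workbooks.publish(item, "sales_overview.twbx",
                             mode=TSC.Server.PublishMode.Overwrite)
```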

Posted 1 week ago

Apply

7.0 - 12.0 years

14 - 18 Lacs

Gurugram, Bengaluru

Work from Office

This is a hands-on data platform engineering role that places significant emphasis on consultative data engineering engagements with a wide range of customer stakeholders: business owners, business analytics, data engineering teams, application development, end users, and management teams.

You will:
- Design and build resilient, efficient data pipelines for batch and real-time streaming (a short streaming sketch follows this listing)
- Architect and design data infrastructure on cloud using Infrastructure-as-Code tools
- Collaborate with product managers, software engineers, data analysts, and data scientists to build scalable, data-driven platforms and tools
- Provide technical product expertise, advise on deployment architectures, and handle in-depth technical questions about data infrastructure, PaaS services, design patterns, and implementation approaches
- Collaborate with enterprise architects, data architects, ETL developers and engineers, data scientists, and information designers to lead the identification and definition of required data structures, formats, pipelines, metadata, and workload orchestration capabilities
- Address aspects such as data privacy & security, data ingestion & processing, data storage & compute, analytical & operational consumption, data modeling, data virtualization, self-service data preparation & analytics, AI enablement, and API integrations
- Lead a team of engineers to deliver impactful results at scale
- Execute projects with an Agile mindset
- Build software frameworks to solve data problems at scale

Technical Requirements:
- 7+ years of data engineering experience leading implementations of large-scale lakehouses on Databricks, Snowflake, or Synapse; prior experience using dbt and Power BI is a plus
- 3+ years of experience architecting solutions for data pipelines from structured and unstructured sources for batch and real-time workloads
- Extensive experience with Azure data services (Databricks, Synapse, ADF) and related Azure infrastructure services such as firewall, storage, and key vault
- Strong programming/scripting experience using SQL, Python, and Spark
- Strong data modeling and data lakehouse concepts
- Knowledge of software configuration management environments and tools such as JIRA, Git, Jenkins, TFS, Shell, PowerShell, and Bitbucket
- Experience with Agile development methods in data-oriented projects

Other Requirements:
- Highly motivated self-starter and team player with demonstrated success in prior roles
- Track record of success working through technical challenges within enterprise organizations
- Ability to prioritize deals, training, and initiatives through highly effective time management
- Excellent problem-solving, analytical, presentation, and whiteboarding skills
- Track record of success dealing with ambiguity (internal and external) and working collaboratively with other departments and organizations to solve challenging problems
- Strong knowledge of technology and industry trends that affect data analytics decisions for enterprise organizations
- Certifications in Azure Data Engineering and related technologies
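A compact sketch of the batch-plus-streaming pipeline work described above, using PySpark Structured Streaming to land a Kafka topic into a Delta lakehouse table. It assumes the Kafka connector and Delta Lake packages are available on the cluster; the broker, topic, and paths are placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Real-time ingestion from a Kafka topic (hypothetical broker and topic).
events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "clickstream")
          .load()
          .select(F.col("key").cast("string"),
                  F.col("value").cast("string"),
                  F.col("timestamp")))

# Append into a bronze Delta table, with a checkpoint for exactly-once recovery.
query = (events.writeStream.format("delta")
         .option("checkpointLocation", "/lake/checkpoints/clickstream")
         .outputMode("append")
         .start("/lake/bronze/clickstream"))
query.awaitTermination()
```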

Posted 1 week ago

Apply

2.0 - 4.0 years

3 - 7 Lacs

Bengaluru

Work from Office

We are looking for an enthusiastic and driven Python Developer to join our AI engineering team and work on building modern AI web applications. You will develop backend and frontend components using leading Python frameworks such as Django, FastAPI, or similar. If you are passionate about learning new technologies, building scalable applications, and growing your skills in a fast-paced environment, we'd love to hear from you.

Key Responsibilities:
- Web Application Development: design, develop, test, and deploy web applications using popular Python frameworks (Django, FastAPI, Reflex, etc.)
- API Design & Development: build and maintain RESTful APIs, with a focus on implementing robust CRUD operations (a minimal sketch follows this listing)
- Backend Engineering: work on server-side logic, data modeling, and integration with a variety of databases (relational, NoSQL, or vector databases)
- Frontend Collaboration: participate in UI/UX development using Python-based web frameworks and collaborate closely with designers and frontend engineers
- Code Quality: write clean, maintainable, well-documented code; engage in code reviews and ensure best practices
- Problem Solving: analyze requirements, propose effective solutions, and assist with troubleshooting and optimizing application performance

Requirements:
- Experience: 0 to 6 months of practical experience in Python development (internships, academic projects, freelance, or work experience are all valued)
- Core Skills: proficiency in Python programming; practical experience with at least one web framework (such as Django, FastAPI, or Flask); experience building and consuming REST APIs and implementing CRUD operations
- Bonus Skills (nice to have): exposure to additional frameworks or libraries for web app development; understanding of frontend basics (HTML, CSS, JS); experience with relational databases (PostgreSQL, MySQL, etc.), NoSQL databases (MongoDB, Redis, DynamoDB, etc.), or vector databases (Pinecone, Weaviate, Qdrant, Milvus, etc.); familiarity with Git or other version control tools
- Soft Skills: quick learner, proactive in exploring new technologies; analytical and problem-solving mindset; good written and verbal communication; team player with a collaborative attitude

Educational Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field, OR equivalent hands-on experience (projects, bootcamps, etc.).
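A minimal FastAPI sketch of the REST CRUD responsibilities named above; an in-memory dict stands in for a real database, and the Item model is invented for illustration.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()


class Item(BaseModel):
    id: int
    name: str
    price: float


items: dict[int, Item] = {}  # in-memory store standing in for a database


@app.post("/items")
def create_item(item: Item) -> Item:
    if item.id in items:
        raise HTTPException(status_code=409, detail="Item already exists")
    items[item.id] = item
    return item


@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    if item_id not in items:
        raise HTTPException(status_code=404, detail="Item not found")
    return items[item_id]


@app.delete("/items/{item_id}")
def delete_item(item_id: int) -> dict:
    items.pop(item_id, None)
    return {"deleted": item_id}
```

Run locally with `uvicorn main:app --reload` and FastAPI serves interactive docs at /docs.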

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 20 Lacs

Pune

Work from Office

Job Description: This role is for a motivated and curious Data Architect / Data Engineer joining the Group Architecture team. This is a hands-on role focused on developing tools, prototypes, and reference solutions that support enterprise data architecture standards. The successful candidate will work with senior architects and engineers to enable the adoption of best practices across data platforms, pipelines, and domains, helping to ensure scalable, secure, and consistent data delivery across the organization. Group Architecture is responsible for setting the strategic direction for technology architecture across the enterprise. The team partners with all business divisions to define architecture principles and standards, evaluate emerging technologies, and guide implementation through hands-on support, tooling, and governance.
Responsibilities:
Design and develop lightweight tools, scripts, and utilities that support the implementation and adoption of data architecture standards (e.g., metadata enrichment, model validation, lineage capture, standards-compliance checks); see the sketch after this listing.
Contribute to the development of reference implementations and prototypes demonstrating approved data architecture patterns.
Support the creation and enhancement of data pipelines, APIs, and other data integration components across various platforms.
Assist in the evaluation and testing of new tools, frameworks, or services for potential use in the data architecture landscape.
Collaborate with senior architects, engineers, and business stakeholders to gather requirements and deliver technical solutions that meet enterprise standards.
Prepare and maintain documentation, dashboards, and visual materials to communicate technical concepts and track adoption of architecture standards.
Participate in architecture review forums and support data governance processes as needed.
Skills:
Foundational experience in data engineering or software development, with the ability to write clean, maintainable code in Python, SQL, or other languages.
Exposure to cloud platforms (such as GCP, AWS, or Azure) and experience with relevant data services and APIs.
Interest in or experience developing internal tools or automation scripts to improve engineering workflows.
Familiarity with concepts such as data lineage, metadata, data quality, or governance is a plus.
Understanding of basic architecture principles and willingness to apply them in practical solution design.
Ability to work collaboratively in a cross-functional team, take initiative, and communicate effectively with technical and non-technical stakeholders.
Exposure to business intelligence tools like Looker, Tableau, or similar.
Understanding of data modeling, even at a high level, is beneficial but not a core focus.
Experience with Git, CI/CD, or cloud-native development practices.
Well-being & Benefits:
Emotionally and mentally balanced: we support you in dealing with life crises, maintaining stability through illness, and maintaining good mental health. Empowering managers who value your ideas and decisions. Show your positive attitude, determination, and open-mindedness. A professional, passionate, and fun workplace with flexible work-from-home options. A modern office with fun and relaxing areas to boost creativity. Continuous learning culture with coaching and support from team experts.
Physically thriving: we support you in managing your physical health by taking appropriate preventive measures and providing a workplace that helps you thrive. Private healthcare and life insurance with premium benefits for you and discounts for your loved ones.
Socially connected: we strongly believe in collaboration, inclusion, and feeling connected to open up new perspectives and strengthen our self-confidence and wellbeing. Kids@TheOffice - support for unexpected events requiring you to care for your kids during work hours. Enjoy retailer discounts, cultural and CSR activities, employee sport clubs, workshops, and more.
Financially secure: we support you to meet personal financial goals during your active career and for the future. Competitive income, performance-based promotions, and a sense of purpose. 24 days holiday, loyalty days, and bank holidays (including weekdays for weekend bank holidays).
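A minimal sketch of the kind of standards-compliance check mentioned in the responsibilities above, assuming a hypothetical metadata export format; the required-field list and the sample record are illustrations, not an actual enterprise standard.

    # Illustrative metadata standards check: flag datasets that are missing
    # required fields. The field list and sample record are hypothetical.
    REQUIRED_FIELDS = ["owner", "description", "data_classification", "lineage_source"]

    def compliance_gaps(dataset_metadata: dict) -> list[str]:
        """Return the required metadata fields that are missing or empty."""
        return [f for f in REQUIRED_FIELDS if not dataset_metadata.get(f)]

    if __name__ == "__main__":
        sample = {
            "name": "sales.orders",
            "owner": "data-platform-team",
            "description": "",             # empty -> non-compliant
            "data_classification": "internal",
        }
        gaps = compliance_gaps(sample)
        status = "NON-COMPLIANT" if gaps else "compliant"
        print(f"{sample['name']}: {status}; missing: {gaps}")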

Posted 1 week ago

Apply

3.0 - 7.0 years

11 - 15 Lacs

Gurugram

Work from Office

Overview
We are seeking an experienced Data Modeller with expertise in designing and implementing data models for modern data platforms. This role requires deep knowledge of data modeling techniques, healthcare data structures, and experience with Databricks Lakehouse architecture. The ideal candidate will have a proven track record of translating complex business requirements into efficient, scalable data models that support analytics and reporting needs.
About the Role
As a Data Modeller, you will be responsible for designing and implementing data models for our Databricks-based Modern Data Platform. You will work closely with business stakeholders, data architects, and data engineers to create logical and physical data models that support the migration from legacy systems to the Databricks Lakehouse architecture, ensuring data integrity, performance, and compliance with healthcare industry standards.
Key Responsibilities
Design and implement logical and physical data models for Databricks Lakehouse implementations
Translate business requirements into efficient, scalable data models
Create and maintain data dictionaries, entity relationship diagrams, and model documentation
Develop dimensional models, data vault models, and other modeling approaches as appropriate (see the star-schema sketch after this listing)
Support the migration of data models from legacy systems to the Databricks platform
Collaborate with data architects to ensure alignment with the overall data architecture
Work with data engineers to implement and optimize data models
Ensure data models comply with healthcare industry regulations and standards
Implement data modeling best practices and standards
Provide guidance on data modeling approaches and techniques
Participate in data governance initiatives and data quality assessments
Stay current with evolving data modeling techniques and industry trends
Qualifications
Extensive experience in data modeling for analytics and reporting systems
Strong knowledge of dimensional modeling, data vault, and other modeling methodologies
Experience with the Databricks platform and Delta Lake architecture
Expertise in healthcare data modeling and industry standards
Experience migrating data models from legacy systems to modern platforms
Strong SQL skills and experience with data definition languages
Understanding of data governance principles and practices
Experience with data modeling tools and technologies
Knowledge of performance optimization techniques for data models
Bachelor's degree in Computer Science, Information Systems, or a related field; advanced degree preferred
Professional certifications in data modeling or related areas
Technical Skills
Data modeling methodologies (dimensional, data vault, etc.)
Databricks platform and Delta Lake
SQL and data definition languages
Data modeling tools (erwin, ER/Studio, etc.)
Data warehousing concepts and principles
ETL/ELT processes and data integration
Performance tuning for data models
Metadata management and data cataloging
Cloud platforms (AWS, Azure, GCP)
Big data technologies and distributed computing
Healthcare Industry Knowledge
Healthcare data structures and relationships
Healthcare terminology and coding systems (ICD, CPT, SNOMED, etc.)
Healthcare data standards (HL7, FHIR, etc.)
Healthcare analytics use cases and requirements
Knowledge of healthcare regulatory requirements (HIPAA, HITECH, etc.) is a plus
Clinical and operational data modeling challenges
Population health and value-based care data needs
Personal Attributes
Strong analytical and problem-solving skills
Excellent attention to detail and a focus on data quality
Ability to translate complex business requirements into technical solutions
Effective communication skills with both technical and non-technical stakeholders
Collaborative approach to working with cross-functional teams
Self-motivated, with the ability to work independently
Continuous learner who stays current with industry trends
What We Offer
Opportunity to design data models for cutting-edge healthcare analytics
Collaborative and innovative work environment
Competitive compensation package
Professional development opportunities
Work with leading technologies in the data space
This position requires a unique combination of data modeling expertise, technical knowledge, and healthcare industry understanding. The ideal candidate will have demonstrated success in designing efficient, scalable data models and a passion for creating data structures that enable powerful analytics and insights.
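For flavor, a minimal sketch of a dimensional (star-schema) model on Databricks Delta Lake, of the kind this role would design. The claims fact table and its dimensions are hypothetical healthcare examples, not the employer's actual model.

    # Illustrative star schema on Delta Lake: one fact table, two dimensions.
    # Table and column names are hypothetical healthcare examples.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("claims-model").getOrCreate()

    spark.sql("""
        CREATE TABLE IF NOT EXISTS dim_patient (
            patient_key BIGINT,
            patient_id  STRING,      -- natural key from the source system
            birth_date  DATE,
            gender      STRING
        ) USING DELTA
    """)

    spark.sql("""
        CREATE TABLE IF NOT EXISTS dim_provider (
            provider_key BIGINT,
            npi          STRING,     -- National Provider Identifier
            specialty    STRING
        ) USING DELTA
    """)

    spark.sql("""
        CREATE TABLE IF NOT EXISTS fact_claim (
            claim_key      BIGINT,
            patient_key    BIGINT,   -- FK to dim_patient
            provider_key   BIGINT,   -- FK to dim_provider
            service_date   DATE,
            icd10_code     STRING,   -- diagnosis coding standard
            billed_amount  DECIMAL(12, 2)
        ) USING DELTA
        PARTITIONED BY (service_date)
    """)

Partitioning the fact table by service_date is a common choice when most analytical queries filter on a date range; the right partition key depends on actual query patterns.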

Posted 1 week ago

Apply

9.0 - 14.0 years

22 - 25 Lacs

Bengaluru

Work from Office

Role & Responsibilities
Provide Snowflake technical leadership for the enterprise reporting and analytics team.
Lead the development and optimization of data models and SQL scripts within Snowflake to support analytical and reporting needs.
Create and maintain data pipelines for ingestion, transformation, and validation.
Create and configure alerts and task monitoring within Snowflake for job status, task failures/successes, security events, data quality, etc. (see the sketch after this listing).
Understand the current-state reporting and automation solution; identify and work on improvement areas.
Work with stakeholders including the executive, product, data, and design teams to assist with data-related technical issues and support their data reporting needs.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
Assemble large, complex data sets that meet functional and non-functional business requirements.
Utilize knowledge of ETL and data modeling to drive and execute systems with a variety of large structured and unstructured sources.
Work with data and drive analytics.
Drive all aspects of digital transformation, data management, reporting, and data science capabilities through PoCs and projects that introduce new capabilities.
Engage with external vendors and internal teams on all aspects of project execution.
Build the infrastructure required for ELT from a wide variety of data sources.
Responsible for system administration, design, architecture, and continuous improvement across the Snowflake platform.
Preferred candidate profile
Bachelor's degree in Computer Science, Business, Math, Statistics, Engineering, or a related field (or equivalent) is required.
10+ years of business and technical experience utilizing EDW tools like Snowflake, ETL tools such as Alteryx, and data visualization tools like Tableau.
Strong technical experience working directly in Snowflake.
Proficiency in SQL, Python, and cloud-based technologies.
Strong experience working directly with business users to identify, define, analyze, test, and implement reporting needs and platforms.
Experience developing data visualizations through tools such as Tableau, SAC, or Power BI to deliver thought-provoking analytical information to the business.
Sound knowledge of ETL, data modeling, statistical, and data science concepts.
Experience in handling structured and unstructured data.
Preferred Qualifications:
Master's or other advanced degree in Computer Science, Business, Math, Statistics, Engineering, or a related field.
SnowPro Core Certification or other data analytics / data engineering certifications.
Manufacturing or business experience with a solid understanding of business operations and processes.
Working knowledge of systems like SAP, SAP BW, Salesforce, etc.
Exposure to cloud technologies: Azure (preferred), AWS, GCP, etc.
Experience with ETL tools like Alteryx, Informatica, or similar environments.
Knowledge of SAP ECC 6.0 modules (SD, MM, PP, FI/CO, QM, PM).
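A minimal sketch of the Snowflake task scheduling and failure monitoring described above, using the snowflake-connector-python client. The connection parameters, task name, and SQL bodies are placeholders, not the employer's actual objects.

    # Illustrative Snowflake task + monitoring via snowflake-connector-python.
    # Connection parameters, task name, and SQL bodies are placeholders.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account",      # placeholder
        user="etl_user",           # placeholder
        password="***",
        warehouse="ETL_WH",
        database="ANALYTICS",
        schema="REPORTING",
    )
    cur = conn.cursor()

    # Schedule a daily refresh as a Snowflake TASK (06:00 UTC).
    cur.execute("""
        CREATE OR REPLACE TASK refresh_daily_sales
          WAREHOUSE = ETL_WH
          SCHEDULE  = 'USING CRON 0 6 * * * UTC'
        AS
          INSERT INTO daily_sales_summary
          SELECT order_date, SUM(amount) FROM raw_orders GROUP BY order_date
    """)
    cur.execute("ALTER TASK refresh_daily_sales RESUME")  # tasks start suspended

    # Surface recent task failures from the task history.
    cur.execute("""
        SELECT name, state, error_message, scheduled_time
        FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY())
        WHERE state = 'FAILED'
        ORDER BY scheduled_time DESC
        LIMIT 10
    """)
    for name, state, error, when in cur.fetchall():
        print(f"{when} {name}: {state} - {error}")

    cur.close()
    conn.close()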

Posted 1 week ago

Apply

1.0 - 5.0 years

10 - 11 Lacs

Jaipur

Work from Office

Data Engineer + AI
Job Summary:
We are looking for a skilled and versatile Data Engineer with expertise in PySpark, Apache Spark, and Databricks, along with experience in analytics, data modeling, and Generative AI / Agentic AI solutions. This role is ideal for someone who thrives at the intersection of data engineering, AI systems, and business insights, contributing to high-impact programs with clients.
Required Skills & Experience:
Advanced proficiency in PySpark, Apache Spark, and Databricks for batch and streaming data pipelines.
Strong experience with SQL for data analysis, transformation, and modeling.
Expertise in data visualization and dashboarding tools (Power BI, Tableau, Looker).
Solid understanding of data warehouse design, relational databases (PostgreSQL, Snowflake, SQL Server), and data lakehouse architectures.
Exposure to Generative AI, RAG, embedding models, and vector databases (e.g., FAISS, Pinecone, ChromaDB).
Experience with Agentic AI frameworks: LangChain, Haystack, CrewAI, or similar.
Familiarity with cloud services for data and AI (Azure, AWS, or GCP).
Excellent problem-solving and collaboration skills, with an ability to bridge engineering and business needs.
Preferred Skills:
Experience with MLflow, Delta Live Tables, or other Databricks-native AI tools.
Understanding of prompt engineering, LLM deployment, and multi-agent orchestration.
Knowledge of CI/CD, Git, Docker, and DevOps pipelines.
Awareness of Responsible AI, data privacy regulations, and enterprise data compliance.
Background in consulting, enterprise analytics, or AI/ML product development.
Key Responsibilities:
Design, build, and optimize distributed data pipelines using PySpark, Apache Spark, and Databricks to support both analytics and AI workloads.
Support RAG pipelines, embedding generation, and data pre-processing for LLM applications (see the pre-processing sketch after this listing).
Create and maintain interactive dashboards and BI reports using Power BI, Tableau, or Looker for business stakeholders and consultants.
Conduct ad hoc data analysis to drive data-driven decision-making and enable rapid insight generation.
Develop and maintain robust data warehouse schemas, star/snowflake models, and support data lake architecture.
Integrate with and support LLM agent frameworks such as LangChain, LlamaIndex, Haystack, or CrewAI for intelligent workflow automation.
Ensure data pipeline monitoring, cost optimization, and scalability in cloud environments (Azure/AWS/GCP).
Collaborate with cross-functional teams, including AI scientists, analysts, and business teams, to drive use-case delivery.
Maintain strong data governance, lineage, and metadata management practices using tools like Azure Purview or DataHub.
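To make the RAG pre-processing responsibility concrete, here is a minimal PySpark sketch that chunks documents into fixed-size, overlapping passages ready for a downstream embedding job. The source table, column names, and chunk parameters are assumptions for illustration.

    # Illustrative RAG pre-processing: chunk documents into overlapping
    # passages with PySpark. Table/column names and sizes are placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import udf, explode, col
    from pyspark.sql.types import ArrayType, StringType

    CHUNK_CHARS = 1000   # assumed chunk size
    OVERLAP = 200        # assumed overlap between consecutive chunks

    def chunk_text(text: str) -> list[str]:
        """Split text into fixed-size chunks with character-based overlap."""
        if not text:
            return []
        step = CHUNK_CHARS - OVERLAP
        return [text[i:i + CHUNK_CHARS] for i in range(0, len(text), step)]

    chunk_udf = udf(chunk_text, ArrayType(StringType()))

    spark = SparkSession.builder.appName("rag-preprocess").getOrCreate()
    docs = spark.table("raw_documents")          # placeholder source table

    chunks = (docs
              .withColumn("chunk", explode(chunk_udf(col("body"))))
              .select("doc_id", "chunk"))

    # Persist chunks; an embedding job would read this table downstream.
    chunks.write.mode("overwrite").saveAsTable("rag_chunks")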

Posted 1 week ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Bengaluru

Work from Office

JD for Business Analyst with SQL
Role name: Business Analyst
Role Description: The associate will be responsible for analysis and requirement gathering with the customer; analyzing various databases and data sources to come up with data models and data mapping; working with business SMEs, product owners, and the solution architect to understand requirements and author business requirement documents with detailed design, flows, use cases, and solution approach, including data model and data mapping as needed; and working in an agile and flexible manner. Good communication and BA experience in the insurance domain are expected.
Competencies: Business Analysis, Oracle SQL
Experience (Years): 6-8
Essential Skills:
Hands-on requirement analysis, design, and requirements gathering, with regular interaction with customers.
Good SQL skills to analyze various databases and data sources and come up with data models.
Ability to do data mapping based on data analysis.
Ability to discuss in depth with business and technical teams and create business requirement documents containing use cases and a detailed solution approach.
Functional/domain experience in insurance.
Desirable Skills:
1) Some experience in data migration preferred.
2) Insurance domain-related certifications.
3) Experience in actuarial work and Power BI is a plus.

Posted 1 week ago

Apply

2.0 - 4.0 years

6 - 10 Lacs

Ahmedabad

Work from Office

Responsibilities:
Design and develop machine learning algorithms and deep learning applications and systems.
Solve complex problems with multilayered data sets, and optimize existing machine learning libraries and frameworks.
Ensure algorithms generate accurate user recommendations.
Stay up to date with developments in the machine learning industry.
Skills Required:
Impeccable analytical and problem-solving skills.
Extensive math and computer skills, with a deep understanding of probability, statistics, and algorithms.
In-depth knowledge of machine learning frameworks, like Keras or PyTorch (see the training-loop sketch after this listing).
Familiarity with data structures, data modeling, and software architecture.
Excellent time management and organizational skills.
Desire to learn.
Excellent communication and collaboration skills.
Innovative mind with a passion for continuous learning.
General knowledge of building machine learning systems.
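For context on the frameworks named above, a minimal PyTorch training-loop sketch for a small classifier on synthetic data; the architecture, data, and hyperparameters are arbitrary illustrations.

    # Minimal PyTorch training loop on synthetic data (illustrative only).
    import torch
    from torch import nn

    torch.manual_seed(0)
    X = torch.randn(256, 10)                 # synthetic features
    y = (X.sum(dim=1) > 0).long()            # synthetic binary labels

    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    for epoch in range(20):
        optimizer.zero_grad()        # clear gradients from the previous step
        logits = model(X)
        loss = loss_fn(logits, y)
        loss.backward()              # backpropagate
        optimizer.step()             # update weights

    accuracy = (model(X).argmax(dim=1) == y).float().mean()
    print(f"final loss {loss.item():.4f}, train accuracy {accuracy:.2%}")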

Posted 1 week ago

Apply

4.0 - 6.0 years

7 - 8 Lacs

Bengaluru

Work from Office

Key Responsibilities:
Dashboard & Report Development:
Design, develop, and deploy visually compelling dashboards and reports in Tableau that provide actionable insights for different business functions.
Work closely with business teams to gather requirements and ensure that the visualizations meet the needs of end users.
Design and implement interactive features like filters, drill-downs, and parameters to enhance the user experience.
Data Integration:
Work with various data sources such as SQL databases, flat files, Excel, cloud data platforms (e.g., AWS, Google BigQuery), and other third-party APIs.
Perform data extraction, transformation, and loading (ETL) tasks to integrate data into Tableau.
Collaborate with data engineers to optimize and prepare datasets for reporting purposes.
Data Modeling & Analysis:
Develop and manage data models within Tableau to create a single source of truth for visual reporting.
Analyze complex datasets and provide actionable insights based on data patterns, trends, and metrics.
Use calculated fields, table calculations, and Level of Detail (LOD) expressions to perform advanced data analysis (a sketch of the equivalent logic follows this listing).
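To illustrate what a FIXED Level of Detail expression computes, here is the equivalent logic in pandas: an aggregate pinned to one dimension (a per-customer total) attached back onto row-level data. The sample columns and values are hypothetical.

    # pandas equivalent of a Tableau FIXED LOD expression:
    #   {FIXED [customer_id] : SUM([amount])}
    # i.e., a per-customer total attached to every row. Sample data is made up.
    import pandas as pd

    orders = pd.DataFrame({
        "customer_id": ["A", "A", "B", "B", "B"],
        "order_id":    [1, 2, 3, 4, 5],
        "amount":      [100.0, 50.0, 20.0, 30.0, 10.0],
    })

    # groupby().transform() keeps row granularity, like a FIXED LOD.
    orders["customer_total"] = orders.groupby("customer_id")["amount"].transform("sum")
    orders["share_of_customer"] = orders["amount"] / orders["customer_total"]

    print(orders)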

Posted 1 week ago

Apply

5.0 - 8.0 years

11 - 15 Lacs

Bengaluru

Work from Office

Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate
Summary: In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems. At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
Skills:
Proficiency in programming languages like R and Python, and database query languages like SQL, Hive, and Pig is desirable. Familiarity with Scala, Java, or C++ is an added advantage.
Proficiency in statistical modelling and machine learning techniques such as time series forecasting, reliability models, Markov models, stochastic models, Bayesian modelling, classification models, cluster analysis, and neural networks (see the forecasting sketch after this listing).
Good exposure to deep learning and associated frameworks (PyTorch, TensorFlow, and Keras).
Ability to perform pre-processing of structured and unstructured data: processing, cleansing, and validating the integrity of data to be used for analysis.
Strong working experience with cloud platforms: build, train, and deploy ML models on Azure/AWS/GCP.
Good working knowledge of distributed computing environments / big data platforms (Hadoop, Elasticsearch, etc.) as well as common database systems and value stores (SQL, Hive, HBase, etc.).
Hands-on experience in MLOps (Dockerization, REST APIs, and CI/CD/CT processes).
Adaptation of foundation models/LLMs to address specific business challenges.
Utilizing version control for maintaining codebase integrity and collaboration, fostering a collaborative and error-free development environment.
Design, deploy, and manage prompt-based models on LLMs for various NLP tasks.
Build and maintain data pipelines and data-processing workflows for prompt engineering on LLMs, utilizing cloud services for scalability and efficiency.
Familiarity with LLM orchestration and agentic AI libraries.
Good understanding of business and the ability to translate domain problems into data science problems.
Ability to communicate effectively with both technical and non-technical stakeholders.
Mandatory skill sets: Python, GenAI, machine learning, data science
Preferred skill sets: data analysis, SQL, MLOps
Years of experience required: 5+
Education qualification: BE/B.Tech/MBA/MCA
Degrees/Field of Study required: Master of Business Administration, Bachelor of Technology, Bachelor of Engineering
Required Skills: Large Language Model (LLM) Fine-Tuning, Python for Data Analysis
Additional listed skills: Accepting Feedback, Active Listening, Algorithm Development, Alteryx (Automation Platform), Analytical Thinking, Analytic Research, Big Data, Business Data Analytics, Communication, Complex Data Analysis, Conducting Research, Creativity, Customer Analysis, Customer Needs Analysis, Dashboard Creation, Data Analysis, Data Analysis Software, Data Collection, Data-Driven Insights, Data Integration, Data Integrity, Data Mining, Data Modeling, Data Pipeline {+ 38 more}
Travel Requirements
Available for Work Visa Sponsorship
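As a small illustration of the time series forecasting skill listed above, a minimal ARIMA example with statsmodels on synthetic data; the generated series and the model order are arbitrary choices, not a recommendation.

    # Minimal time series forecasting sketch: ARIMA on a synthetic series.
    # The generated data and the (1, 1, 1) order are arbitrary illustrations.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(42)
    # Synthetic monthly series: random walk with upward drift.
    values = np.cumsum(rng.normal(loc=1.0, scale=2.0, size=60))
    series = pd.Series(values,
                       index=pd.date_range("2020-01-01", periods=60, freq="MS"))

    model = ARIMA(series, order=(1, 1, 1))   # (p, d, q) chosen arbitrarily
    fitted = model.fit()

    forecast = fitted.forecast(steps=6)      # six months ahead
    print(forecast)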

Posted 1 week ago

Apply

0.0 - 1.0 years

2 - 5 Lacs

Bengaluru

Work from Office

karmanX - Expert-Led Career Programs in Full Stack & Data Analytics
About the job
Company Description: karmanX is looking for highly dedicated Data Analysts to analyse student data and build dynamic dashboards with insights.
Role Description: This is a full-time remote role for a Data Analyst at karmanX. The Data Analyst will be responsible for analyzing data, creating data models, and utilizing statistical methods to interpret and present findings. The role will also involve effective communication of complex data-driven insights.
Qualifications:
Analytical skills, data analytics, and statistics.
Strong communication skills.
Data modeling expertise.
Bachelor's or Master's degree in Data Science, Statistics, Mathematics, or a related field.
Experience with data visualization tools.
Strong problem-solving abilities.
Ability to work independently and collaboratively in a remote team.
Passion for working with data sets.
0-1 years of experience.
Remote work.

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies