40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Amgen
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

About The Role
Role Description: Let’s do this. Let’s change the world. We are looking for a highly motivated, expert Data Engineer who can own the design and development of complex data pipelines, solutions and frameworks, with deep domain knowledge of Manufacturing, Process Development and/or Supply Chain in biotech, life sciences or pharma. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Design, develop, and maintain complex ETL/ELT data pipelines in Databricks using PySpark, Scala, and SQL to process large-scale datasets
- Understand the biotech/pharma or related domains and build highly efficient data pipelines to migrate and deploy complex data across systems
- Design and implement solutions that enable unified data access, governance, and interoperability across hybrid cloud environments
- Ingest and transform structured and unstructured data from databases (PostgreSQL, MySQL, SQL Server, MongoDB, etc.), APIs, logs, event streams, images, PDFs, and third-party platforms
- Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring
- Apply expertise in data quality, data validation and verification frameworks
- Innovate, explore and implement new tools and technologies for more efficient data processing
- Proactively identify and implement opportunities to automate tasks and develop reusable frameworks
- Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value
- Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories
- Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle
- Collaborate and communicate effectively with product and cross-functional teams to understand business requirements and translate them into technical solutions

Must-Have Skills:
- Deep domain knowledge of Manufacturing, Process Development and/or Supply Chain in biotech, life sciences or pharma
- Hands-on experience in data engineering technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies
- Proficiency in workflow orchestration and performance tuning for big data processing
- Strong understanding of AWS services
- Ability to quickly learn, adapt and apply new technologies
- Strong problem-solving and analytical skills
- Excellent communication and teamwork skills
- Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices
Good-to-Have Skills:
- Data engineering experience in the biotechnology or pharma industry
- Experience in writing APIs to make data available to consumers
- Experience with SQL/NoSQL databases and vector databases for large language models
- Experience with data modeling and performance tuning for both OLAP and OLTP databases
- Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps

Education and Professional Certifications
- Master’s degree and 3 to 4+ years of Computer Science, IT or related field experience OR Bachelor’s degree and 5 to 8+ years of Computer Science, IT or related field experience
- AWS Certified Data Engineer preferred
- Databricks certification preferred
- Scaled Agile SAFe certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Ability to learn quickly; organized and detail-oriented
- Strong presentation and public speaking skills

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
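By way of illustration of the Databricks/PySpark pipeline work the posting above describes, here is a minimal, hedged sketch of one ETL step: ingesting a PostgreSQL table over JDBC, applying a basic quality check, and appending the result to a Delta table. The host, credentials, and table names are hypothetical placeholders, not details from the posting.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("batch_ingest_example").getOrCreate()

    # Ingest a relational source over JDBC (hypothetical host/table/credentials)
    orders = (spark.read.format("jdbc")
        .option("url", "jdbc:postgresql://db.example.com:5432/mfg")
        .option("dbtable", "public.batch_orders")
        .option("user", "etl_user")
        .option("password", "***")
        .load())

    # Rigorous quality check: reject rows missing a primary key or timestamp
    valid = orders.filter(F.col("order_id").isNotNull() & F.col("created_at").isNotNull())
    rejected = orders.count() - valid.count()
    print(f"rejected {rejected} rows failing validation")

    # Transform and persist as a Delta table for downstream analytics
    (valid.withColumn("ingest_date", F.current_date())
          .write.format("delta").mode("append")
          .partitionBy("ingest_date")
          .saveAsTable("bronze.batch_orders"))

In a real Databricks job a step like this would typically be scheduled by an orchestrator, with the rejected-row count fed into monitoring rather than printed.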
Posted 3 weeks ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Amgen
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

About The Role
Role Description: Let’s do this. Let’s change the world. We are looking for a highly motivated, expert Data Engineer who can own the design and development of complex data pipelines, solutions and frameworks. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Design, develop, and maintain complex ETL/ELT data pipelines in Databricks using PySpark, Scala, and SQL to process large-scale datasets
- Understand the biotech/pharma or related domains and build highly efficient data pipelines to migrate and deploy complex data across systems
- Design and implement solutions that enable unified data access, governance, and interoperability across hybrid cloud environments
- Ingest and transform structured and unstructured data from databases (PostgreSQL, MySQL, SQL Server, MongoDB, etc.), APIs, logs, event streams, images, PDFs, and third-party platforms
- Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring
- Apply expertise in data quality, data validation and verification frameworks
- Innovate, explore and implement new tools and technologies for more efficient data processing
- Proactively identify and implement opportunities to automate tasks and develop reusable frameworks
- Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value
- Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories
- Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle
- Collaborate and communicate effectively with product and cross-functional teams to understand business requirements and translate them into technical solutions

Must-Have Skills:
- Hands-on experience in data engineering technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies
- Proficiency in workflow orchestration and performance tuning for big data processing
- Strong understanding of AWS services
- Ability to quickly learn, adapt and apply new technologies
- Strong problem-solving and analytical skills
- Excellent communication and teamwork skills
- Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices
Good-to-Have Skills:
- Data engineering experience in the biotechnology or pharma industry
- Experience in writing APIs to make data available to consumers
- Experience with SQL/NoSQL databases and vector databases for large language models
- Experience with data modeling and performance tuning for both OLAP and OLTP databases
- Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps

Education and Professional Certifications
- Master’s degree and 3 to 4+ years of Computer Science, IT or related field experience OR Bachelor’s degree and 5 to 8+ years of Computer Science, IT or related field experience
- AWS Certified Data Engineer preferred
- Databricks certification preferred
- Scaled Agile SAFe certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Ability to learn quickly; organized and detail-oriented
- Strong presentation and public speaking skills

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
Posted 3 weeks ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

RCE - Risk Data Engineer/Lead (Description – External)

Job Description:
Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of financial and non-financial services across the globe. This is a senior, hands-on technical delivery role requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations, and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 3-6 years of experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars our team supports (Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation.

In this role you will:
- Develop a thorough understanding of the data science lifecycle, including data exploration, preprocessing, modelling, validation, and deployment
- Design, build, and maintain tree-based predictive models, such as decision trees, random forests, and gradient-boosted trees, with a low-level understanding of their algorithms and functioning
- Handle ingestion and provisioning of raw datasets, enriched tables, and/or curated, re-usable data assets to enable a variety of use cases
- Evaluate modern technologies, frameworks, and tools in the data engineering space to drive innovation and improve data processing capabilities

Core/Must-Have Skills:
- Significant data analysis experience using Python, SQL and Spark
- Experience writing Python or Spark scripts for data transformation, integration and automation tasks
- 3-6 years’ experience with cloud ML (AWS) or similar tools
- Design, build, and maintain tree-based predictive models, such as decision trees, random forests, and gradient-boosted trees, with a low-level understanding of their algorithms and functioning
- Strong experience with statistical analytical techniques, data mining and predictive models
- Conduct A/B testing and other model validation techniques to ensure the accuracy and reliability of data models
- Experience with optimization modelling, machine learning, forecasting and/or natural language processing
- Hands-on experience with Amazon S3 data storage, data lifecycle policies, and integration with other AWS services
- Maintain, optimize, and scale AWS Redshift clusters to ensure efficient data storage, retrieval, and query performance
- Utilize Amazon S3 to store raw data, manage large datasets, and integrate with other AWS services to ensure secure, scalable, and cost-effective data solutions
- Experience in implementing CI/CD pipelines in AWS
- At least 4+ years of experience in database design and dimension modelling using SQL
- Advanced working SQL knowledge and experience with relational and NoSQL databases, as well as working familiarity with a variety of databases (SQL Server, Neo4j)
- Strong analytical and critical thinking skills, with the ability to identify and resolve issues in data pipelines and systems
- Strong communication skills to effectively collaborate with team members and present findings to stakeholders
- Collaborate with cross-functional teams to ensure successful implementation of solutions
- Experience with OLAP and OLTP databases, and data structuring/modelling with an understanding of key data points

Good to have:
- Domain knowledge (if applicable) in financial fraud to enhance predictive modelling and anomaly detection capabilities
- Knowledge of AWS IAM for managing secure access to data resources
- Familiarity with DevOps practices and automation tools like Terraform or CloudFormation
- Experience with data visualization tools like QuickSight, or integrating Redshift data with BI tools (Tableau, Power BI, etc.)
- AWS certifications such as AWS Certified Data Analytics – Specialty or AWS Certified Solutions Architect are a plus

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
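Since the posting above leans heavily on tree-based predictive models, a minimal scikit-learn sketch may help make the requirement concrete: fitting a gradient-boosted tree classifier on a synthetic, imbalanced dataset and validating it on held-out data. The dataset, class weights, and hyperparameters are illustrative assumptions only, not EY specifics.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    # Synthetic stand-in for an imbalanced fraud-style dataset
    X, y = make_classification(n_samples=10_000, n_features=20,
                               weights=[0.97, 0.03], random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

    # Gradient-boosted trees: each stage fits a shallow tree to the
    # residual errors of the ensemble built so far
    model = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.05)
    model.fit(X_train, y_train)

    # Validate on held-out data, in the spirit of the model-validation bullet
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"holdout ROC-AUC: {auc:.3f}")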
Posted 3 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

RCE - Risk Data Engineer/Lead (Description – External)

Job Description:
Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of financial and non-financial services across the globe. This is a senior, hands-on technical delivery role requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations, and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 3-6 years of experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars our team supports (Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation.

In this role you will:
- Develop a thorough understanding of the data science lifecycle, including data exploration, preprocessing, modelling, validation, and deployment
- Design, build, and maintain tree-based predictive models, such as decision trees, random forests, and gradient-boosted trees, with a low-level understanding of their algorithms and functioning
- Handle ingestion and provisioning of raw datasets, enriched tables, and/or curated, re-usable data assets to enable a variety of use cases
- Evaluate modern technologies, frameworks, and tools in the data engineering space to drive innovation and improve data processing capabilities

Core/Must-Have Skills:
- Significant data analysis experience using Python, SQL and Spark
- Experience writing Python or Spark scripts for data transformation, integration and automation tasks
- 3-6 years’ experience with cloud ML (AWS) or similar tools
- Design, build, and maintain tree-based predictive models, such as decision trees, random forests, and gradient-boosted trees, with a low-level understanding of their algorithms and functioning
- Strong experience with statistical analytical techniques, data mining and predictive models
- Conduct A/B testing and other model validation techniques to ensure the accuracy and reliability of data models
- Experience with optimization modelling, machine learning, forecasting and/or natural language processing
- Hands-on experience with Amazon S3 data storage, data lifecycle policies, and integration with other AWS services
- Maintain, optimize, and scale AWS Redshift clusters to ensure efficient data storage, retrieval, and query performance
- Utilize Amazon S3 to store raw data, manage large datasets, and integrate with other AWS services to ensure secure, scalable, and cost-effective data solutions
- Experience in implementing CI/CD pipelines in AWS
- At least 4+ years of experience in database design and dimension modelling using SQL
- Advanced working SQL knowledge and experience with relational and NoSQL databases, as well as working familiarity with a variety of databases (SQL Server, Neo4j)
- Strong analytical and critical thinking skills, with the ability to identify and resolve issues in data pipelines and systems
- Strong communication skills to effectively collaborate with team members and present findings to stakeholders
- Collaborate with cross-functional teams to ensure successful implementation of solutions
- Experience with OLAP and OLTP databases, and data structuring/modelling with an understanding of key data points

Good to have:
- Domain knowledge (if applicable) in financial fraud to enhance predictive modelling and anomaly detection capabilities
- Knowledge of AWS IAM for managing secure access to data resources
- Familiarity with DevOps practices and automation tools like Terraform or CloudFormation
- Experience with data visualization tools like QuickSight, or integrating Redshift data with BI tools (Tableau, Power BI, etc.)
- AWS certifications such as AWS Certified Data Analytics – Specialty or AWS Certified Solutions Architect are a plus

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 3 weeks ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

RCE - Risk Data Engineer/Lead (Description – External)

Job Description:
Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of financial and non-financial services across the globe. This is a senior, hands-on technical delivery role requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations, and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 3-6 years of experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars our team supports (Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation.

In this role you will:
- Develop a thorough understanding of the data science lifecycle, including data exploration, preprocessing, modelling, validation, and deployment
- Design, build, and maintain tree-based predictive models, such as decision trees, random forests, and gradient-boosted trees, with a low-level understanding of their algorithms and functioning
- Handle ingestion and provisioning of raw datasets, enriched tables, and/or curated, re-usable data assets to enable a variety of use cases
- Evaluate modern technologies, frameworks, and tools in the data engineering space to drive innovation and improve data processing capabilities

Core/Must-Have Skills:
- Significant data analysis experience using Python, SQL and Spark
- Experience writing Python or Spark scripts for data transformation, integration and automation tasks
- 3-6 years’ experience with cloud ML (AWS) or similar tools
- Design, build, and maintain tree-based predictive models, such as decision trees, random forests, and gradient-boosted trees, with a low-level understanding of their algorithms and functioning
- Strong experience with statistical analytical techniques, data mining and predictive models
- Conduct A/B testing and other model validation techniques to ensure the accuracy and reliability of data models
- Experience with optimization modelling, machine learning, forecasting and/or natural language processing
- Hands-on experience with Amazon S3 data storage, data lifecycle policies, and integration with other AWS services
- Maintain, optimize, and scale AWS Redshift clusters to ensure efficient data storage, retrieval, and query performance
- Utilize Amazon S3 to store raw data, manage large datasets, and integrate with other AWS services to ensure secure, scalable, and cost-effective data solutions
- Experience in implementing CI/CD pipelines in AWS
- At least 4+ years of experience in database design and dimension modelling using SQL
- Advanced working SQL knowledge and experience with relational and NoSQL databases, as well as working familiarity with a variety of databases (SQL Server, Neo4j)
- Strong analytical and critical thinking skills, with the ability to identify and resolve issues in data pipelines and systems
- Strong communication skills to effectively collaborate with team members and present findings to stakeholders
- Collaborate with cross-functional teams to ensure successful implementation of solutions
- Experience with OLAP and OLTP databases, and data structuring/modelling with an understanding of key data points

Good to have:
- Domain knowledge (if applicable) in financial fraud to enhance predictive modelling and anomaly detection capabilities
- Knowledge of AWS IAM for managing secure access to data resources
- Familiarity with DevOps practices and automation tools like Terraform or CloudFormation
- Experience with data visualization tools like QuickSight, or integrating Redshift data with BI tools (Tableau, Power BI, etc.)
- AWS certifications such as AWS Certified Data Analytics – Specialty or AWS Certified Solutions Architect are a plus

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 3 weeks ago
8.0 - 13.0 years
7 - 12 Lacs
Noida
Work from Office
Job Overview
We are looking for a Data Engineer who will be part of our Analytics Practice and will be expected to work actively in a multi-disciplinary, fast-paced environment. This role requires a broad range of skills and the ability to step into different roles depending on the size and scope of the project; its primary responsibility is the acquisition, transformation, loading and processing of data from a multitude of disparate data sources, including structured and unstructured data, for advanced analytics and machine learning in a big data environment.

Responsibilities:
- Engineer a modern data pipeline to collect, organize, and process data from disparate sources
- Perform data management tasks, such as conducting data profiling, assessing data quality, and writing SQL queries to extract and integrate data
- Develop efficient data collection systems and sound strategies for getting quality data from different sources
- Consume and analyze data from the data pool to support inference, prediction and recommendation of actionable insights that support business growth
- Design and develop ETL processes using tools and scripting
- Troubleshoot and debug ETL processes
- Performance tuning and optimization of ETL processes
- Provide support to new or existing applications while recommending best practices and leading projects to implement new functionality
- Collaborate in design reviews and code reviews to ensure standards are met
- Recommend new standards for visualizations
- Learn and develop new ETL techniques as required to keep up with contemporary technologies
- Review the solution requirements and architecture to ensure selection of appropriate technology, efficient use of resources and integration of multiple systems and technologies
- Support presentations to customers and partners
- Advise on new technology trends and possible adoption to maintain competitive advantage

Experience Needed:
- 8+ years of related experience is required
- A BS or Master's degree in Computer Science or a related technical discipline is required
- ETL experience with data integration to support data marts, extracts and reporting
- Experience connecting to varied data sources
- Excellent SQL coding experience with performance optimization for data queries
- Understands different data models like normalized, de-normalized, star, and snowflake models
- Has worked with transactional, temporal, time series, and structured and unstructured data
- Experience with Azure Data Factory and Azure Synapse Analytics
- Has worked in big data environments, cloud data stores, different RDBMS and OLAP solutions
- Experience in cloud-based ETL development processes
- Experience in deployment and maintenance of ETL jobs
- Familiar with the principles and practices involved in development and maintenance of software solutions and architectures and in service delivery
- Strong technical background; remains evergreen with technology and industry developments
- At least 3 years of demonstrated success in software engineering, release engineering, and/or configuration management
- Highly skilled in scripting languages like PowerShell
- Substantial experience in the implementation and execution of CI/CD processes
Additional Requirements
- Demonstrated ability to have successfully completed multiple, complex technical projects
- Prior experience with application delivery using an onshore/offshore model
- Experience with business processes across multiple master data domains in a services-based company
- Demonstrates a rational and organized approach to the tasks undertaken and an awareness of the need to achieve quality
- Demonstrates high standards of professional behavior in dealings with clients, colleagues and staff
- Able to make sound and far-reaching decisions alone on major issues and to take full responsibility for them on a technical basis
- Strong written communication skills; effective and persuasive in both written and oral communication
- Experience with gathering end-user requirements and writing technical documentation
- Time management and multitasking skills to effectively meet deadlines under time-to-market pressure

Conduent is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, creed, religion, ancestry, national origin, age, gender identity, gender expression, sex/gender, marital status, sexual orientation, physical or mental disability, medical condition, use of a guide dog or service animal, military/veteran status, citizenship status, basis of genetic information, or any other group protected by law. People with disabilities who need a reasonable accommodation to apply for or compete for employment with Conduent may request such accommodation(s) by submitting their request through this form that must be downloaded:. Complete the form and then email it as an attachment to. You may also. At Conduent we value the health and safety of our associates, their families and our community. For US applicants, while we DO NOT require vaccination for most of our jobs, we DO require that you provide us with your vaccination status, where legally permissible. Providing this information is a requirement of your employment at Conduent.
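The data profiling and quality-assessment responsibilities listed above can be illustrated with a short pandas sketch; the file name, columns, and quality gates below are hypothetical assumptions, not part of the posting.

    import pandas as pd

    # Hypothetical extract pulled from one of the disparate sources
    df = pd.read_csv("customer_orders_extract.csv")

    # Basic profile: column types, null rates, and distinct-value counts
    profile = pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "null_pct": df.isna().mean().round(3),
        "distinct": df.nunique(),
    })
    print(profile)

    # Simple quality gates a pipeline might enforce before loading a data mart
    assert df["order_id"].is_unique, "duplicate order_id values found"
    assert df["order_date"].notna().all(), "order_date contains nulls"

In practice, checks like these would run inside the ETL tool (e.g. as a validation activity in an Azure Data Factory pipeline) rather than as ad-hoc asserts.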
Posted 3 weeks ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
Role Description
Role Proficiency: Act creatively to develop applications by selecting appropriate technical options, optimizing application development, maintenance and performance by employing design patterns and reusing proven solutions. Account for others' developmental activities, assisting the Project Manager in day-to-day project execution.

Outcomes:
- Interpret the application feature and component designs and develop them in accordance with specifications
- Code, debug, test, document and communicate product/component/feature development stages
- Validate results with user representatives, integrating and commissioning the overall solution
- Select and create appropriate technical options for development, such as reusing, improving or reconfiguring existing components, while creating own solutions for new contexts
- Optimize efficiency, cost and quality
- Influence and improve customer satisfaction
- Influence and improve employee engagement within the project teams
- Set FAST goals for self/team; provide feedback on team members' FAST goals

Measures of Outcomes:
- Adherence to engineering process and standards (coding standards)
- Adherence to project schedule/timelines
- Number of technical issues uncovered during the execution of the project
- Number of defects in the code
- Number of defects post delivery
- Number of non-compliance issues
- Percent of voluntary attrition
- On-time completion of mandatory compliance trainings

Outputs Expected:
Code: Code as per the design; define coding standards, templates and checklists; review code for team and peers
Documentation: Create/review templates, checklists, guidelines and standards for design/process/development; create/review deliverable documents, design documentation, requirements, test cases and results
Configure: Define and govern the configuration management plan; ensure compliance from the team
Test: Review/create unit test cases, scenarios and execution; review the test plan created by the testing team; provide clarifications to the testing team
Domain Relevance: Advise software developers on design and development of features and components with a deeper understanding of the business problem being addressed for the client; learn more about the customer domain and identify opportunities to provide value addition to customers; complete relevant domain certifications
Manage Project: Support the Project Manager with inputs for the projects; manage delivery of modules; manage complex user stories
Manage Defects: Perform defect RCA and mitigation; identify defect trends and take proactive measures to improve quality
Estimate: Create and provide input for effort and size estimation and plan resources for projects
Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries and client universities; review the reusable documents created by the team
Release: Execute and monitor the release process
Design: Contribute to creation of design (HLD, LLD, SAD)/architecture for applications, features, business components and data models
Interface With Customer: Clarify requirements and provide guidance to the development team; present design options to customers; conduct product demos; work closely with customer architects to finalize designs
Manage Team: Set FAST goals and provide feedback; understand aspirations of team members and provide guidance and opportunities; ensure team members are upskilled and engaged in the project; proactively identify attrition risks and work with BSE on retention measures
Certifications: Obtain relevant domain and technology certifications
Skill Examples:
- Explain and communicate the design/development to the customer
- Perform and evaluate test results against product specifications
- Break down complex problems into logical components
- Develop user interfaces and business software components
- Use data models
- Estimate time, effort and resources required for developing/debugging features/components
- Perform and evaluate tests in the customer or target environments
- Make quick decisions on technical/project-related challenges
- Manage a team, mentor, and handle people-related issues within the team
- Maintain high motivation levels and positive dynamics within the team
- Interface with other teams, designers and other parallel practices
- Set goals for self and team; provide feedback to team members
- Create and articulate impactful technical presentations
- Follow a high level of business etiquette in emails and other business communication
- Drive conference calls with customers and answer customer questions
- Proactively ask for and offer help
- Work under pressure, determine dependencies and risks, facilitate planning, and handle multiple tasks
- Build confidence with customers by meeting deliverables on time with a quality product

Knowledge Examples:
- Appropriate software programs/modules
- Functional and technical design
- Programming languages: proficiency in multiple skill clusters
- DBMS, operating systems and software platforms
- Software Development Life Cycle
- Agile methods: Scrum or Kanban
- Integrated development environments (IDE)
- Rapid application development (RAD)
- Modelling technologies and languages
- Interface definition languages (IDL)
- Broad knowledge of the customer domain and deep knowledge of the sub-domain where the problem is solved

Additional Comments
Key Responsibilities (Experience: 8 years):
- Develop, maintain, and optimize complex PL/SQL procedures, functions, and packages for data extraction, transformation, and loading (ETL)
- Design, develop, and deploy data pipelines and workflows using Python
- Build and manage Snowflake data warehouse environments and optimize performance
- Create, manage, and maintain OLAP cubes to support reporting and analytical needs
- Ensure data quality, consistency, and integrity across various systems and data sources
- Collaborate with business stakeholders to gather and understand data requirements, especially related to financial datasets
- Assist in financial data modeling and reporting efforts, ensuring compliance with industry standards and internal governance
- Support data governance and data management best practices across teams

Mandatory Skillsets:
- PL/SQL
- Python
- Snowflake
- Cube and data creation
- Good understanding of data and data management
- Some knowledge of financial data and systems

Skills: PL/SQL, Python, Snowflake, Cubes & Data Creation
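A minimal sketch of the Python-plus-Snowflake pipeline work named in the mandatory skillset, using the snowflake-connector-python package; the account, credentials, and table names are assumed for illustration only.

    import snowflake.connector

    # Hypothetical connection parameters
    conn = snowflake.connector.connect(
        account="xy12345.eu-west-1",
        user="etl_user",
        password="***",
        warehouse="ETL_WH",
        database="FINANCE",
        schema="STAGING",
    )

    try:
        cur = conn.cursor()
        # Rebuild a curated reporting table from staged raw rows,
        # filtering obvious bad records and aggregating by account and date
        cur.execute("""
            CREATE OR REPLACE TABLE DAILY_POSITIONS_CURATED AS
            SELECT account_id,
                   TRY_TO_DATE(as_of) AS as_of_date,
                   SUM(amount)        AS total_amount
            FROM DAILY_POSITIONS_RAW
            WHERE amount IS NOT NULL
            GROUP BY account_id, TRY_TO_DATE(as_of)
        """)
        print("curated table rebuilt")
    finally:
        conn.close()

A production version would typically push the transformation into a scheduled task or orchestrated DAG and parameterize the connection via a secrets manager rather than inline credentials.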
Posted 3 weeks ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
TCS HIRING!!
ROLE: AWS Data Architect
YEARS OF EXP: 8+
LOCATION: Chennai / Pune / Bangalore / Hyderabad

Data Architect - Must have:
- Relational SQL/caching expertise: deep knowledge of Amazon Aurora PostgreSQL, ElastiCache, etc.
- Data modeling: experience in OLTP and OLAP schemas, normalization, denormalization, indexing, and partitioning
- Schema design & migration: defining best practices for schema evolution when migrating from SQL Server to PostgreSQL
- Data governance: designing data lifecycle policies, archival strategies, and regulatory compliance frameworks
- AWS Glue & AWS DMS: leading data migration strategies to Aurora PostgreSQL
- ETL & data pipelines: expertise in Extract, Transform, Load (ETL) workflows, Glue job features and event-driven architectures
- Data transformation & mapping: PostgreSQL PL/pgSQL migration/transformation expertise while ensuring data integrity
- Cross-platform data integration: connecting cloud and on-premises/other-cloud data sources
- AWS data services: strong experience in S3, Glue, Lambda, Redshift, Athena, and Kinesis
- Infrastructure as Code (IaC): using Terraform, CloudFormation, or AWS CDK for database provisioning
- Security & compliance: implementing IAM, encryption (AWS KMS), access control policies, and compliance frameworks (e.g. GDPR, PII)
- Query tuning & indexing strategies: optimizing queries for high performance
- Capacity planning & scaling: ensuring high availability, failover mechanisms, and auto-scaling strategies
- Data partitioning & storage optimization: designing cost-efficient hot/cold data storage policies
- Experience setting up AWS architecture as per project requirements

Good to have:
- Data warehousing: expertise in Amazon Redshift, Snowflake, or BigQuery
- Big data processing: familiarity with Apache Spark, EMR, Hadoop, or Kinesis
- Data lakes & analytics: experience in AWS Lake Formation, Glue Catalog, and Athena
- Machine learning pipelines: understanding of SageMaker, Bedrock, etc. for AI-driven analytics
- CI/CD for data pipelines: knowledge of AWS CodePipeline, Jenkins, or GitHub Actions
- Serverless data architectures: experience with event-driven systems (SNS, SQS, Step Functions)
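As a hedged illustration of the serverless analytics work listed under AWS data services above, the sketch below runs an Athena query with boto3 and polls for completion; the region, database, table, and results bucket are hypothetical placeholders.

    import time
    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    # Submit a query against a hypothetical lake database
    qid = athena.start_query_execution(
        QueryString="SELECT order_status, COUNT(*) AS n FROM orders GROUP BY order_status",
        QueryExecutionContext={"Database": "sales_lake"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )["QueryExecutionId"]

    # Poll until the query reaches a terminal state
    while True:
        state = athena.get_query_execution(
            QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state == "SUCCEEDED":
        rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
        for row in rows:
            print([col.get("VarCharValue") for col in row["Data"]])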
Posted 3 weeks ago
0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Excellent opportunity for a Data Modeler in Kolkata.

Skills Required: Data Modeling, SAS, Oracle Database, Banking Domain
- Experience in analysing, designing and developing Business Intelligence (BI) database applications using MS SQL Server and SSAS
- Experience in analysing and processing SSAS cubes to store data in OLAP databases
- Good at building MDX queries and Data Mining Extensions (DMX) queries for Analysis Services cubes and Reporting Services
- Solid understanding of data modelling, evaluating data sources, data warehouse/data mart design, ETL, BI, OLAP, and client/server applications

#datamodeler #SAS #Banking
Please share your CV at nitin.panchal@inspiraenterprise.com
Posted 3 weeks ago
8.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description
UST is looking for a highly skilled Power BI Lead with extensive hands-on experience in SQL development, query optimisation, data mapping, and data warehousing. This role requires a strong technical foundation in database performance tuning, a deep understanding of ETL processes, and the ability to align data strategies with business objectives. The ideal candidate will have a proven track record of leading data-driven initiatives, collaborating with cross-functional teams, and ensuring efficient data processing at scale.

Key Responsibilities:
- Develop, optimise, and maintain complex SQL queries for high-performance data retrieval and processing
- Analyse and fine-tune query performance, optimising execution plans, indexing strategies, and partitioning
- Define and implement data mappings across multiple systems, ensuring accuracy and consistency
- Work with stakeholders to translate business requirements into ETL pipelines and data transformation logic
- 8+ years of hands-on experience in SQL development and query optimisation
- Strong expertise in data warehousing concepts, ETL processes, and database performance tuning
- Proven ability to map, transform, and integrate data across multiple sources
- Excellent problem-solving skills with a keen eye for performance and scalability
- Strong communication skills, with the ability to translate complex technical concepts into business terms

Required Skills and Experience:
- Over 8 years of experience in developing BI applications utilising SQL Server/SF/GCP/PostgreSQL, the BI stack, Power BI, and Tableau
- Practical understanding of data modelling (dimensional and relational) concepts such as star-schema modelling, snowflake schema modelling, and fact and dimension tables
- Ability to translate business requirements into workable functional and non-functional requirements
- Capable of taking ownership and communicating with C-suite executives and stakeholders
- Extensive database programming experience writing T-SQL user-defined functions, triggers, views, temporary tables, constraints, and indexes using various DDL and DML commands
- Experienced in creating SSAS-based OLAP cubes and writing complex DAX
- Ability to work with external tools like Tabular Editor and DAX Studio
- Able to understand and customise complex stored procedures and queries that implement business logic and processes in the backend for data extraction
- Hands-on experience with incremental refresh, RLS, parameterization, dataflows and gateways
- Experience in design and development of Business Intelligence solutions using SSRS and Power BI
- Experience in optimisation of Power BI reports implementing Mixed and DirectQuery modes

Skills: Power BI, Databricks, MS Fabric, UI/UX
Posted 3 weeks ago
5.0 years
0 Lacs
Greater Chennai Area
On-site
Role Summary
The AI, Data, and Analytics (AIDA) organization, a Pfizer Digital organization, is responsible for the development and management of all data and analytics tools and platforms across the enterprise, from global product development, to manufacturing, to commercial, to point of patient care across 100+ countries. One of the team’s top priorities is the development of Business Intelligence (BI), Reporting, and Visualization products, which will serve as an enabler for the company’s digital transformation to bring innovative therapeutics to patients.

We are looking for a technically skilled and experienced Reporting Engineering Manager who is passionate about developing BI and data visualization products for our Customer Facing and Sales Enablement colleagues, totaling over 20,000 individuals. This role involves working across multiple business segments globally to deliver best-in-class BI Reporting and Visualization capabilities that enable impactful business decisions and cohesive, high-engagement user experiences.

In this position, you will be accountable for a thorough understanding of data, business, and analytic requirements in order to deliver high-impact, relevant, interactive data visualization products that drive company performance by continuously monitoring, measuring, identifying root causes, and proactively identifying patterns and triggers across the company to optimize performance. This role will also drive best practices and standards for BI & Visualization, and will work closely with stakeholders to understand their needs and ensure that reporting assets are created with a focus on Customer Experience. It requires working with complex and advanced data environments, employing the right architecture to build scalable semantic layers and contemporary reporting visualizations. The Reporting Manager will ensure data quality and integrity by validating the accuracy of KPIs and insights, resolving anomalies, implementing data quality checks, and conducting system integration testing (SIT) and user acceptance testing (UAT). The ideal candidate is a passionate and results-oriented product lead with a proven track record of delivering data- and analytics-driven solutions for the pharmaceutical industry.

Role Responsibilities:
- Serve as an engineering expert in business intelligence and data visualization products in service of the field force and HQ enabling functions
- Act as a technical BI & Visualization developer on projects and collaborate with global team members (e.g. other engineers, regional delivery and activation teams, vendors) to architect, design and create BI & Visualization products at scale
- Develop a thorough understanding of data, business, and analytic requirements (incl. BI product blueprints such as SMART) to deliver high-impact, relevant data visualization products while respecting project or program budgets and timelines
- Deliver quality functional requirements and solution designs, adhering to established standards and best practices
- Follow Pfizer processes in Portfolio Management, Project Management, and the Product Management Playbook, following Agile, Hybrid or Enterprise Solution Life Cycle approaches
- Extensive technical and implementation knowledge of a multitude of BI and visualization platforms, including but not limited to Tableau, MicroStrategy, Business Objects, and MS SSRS
- Experience with cloud-based architectures, cloud analytics products/solutions, and data products/solutions (e.g. AWS Redshift, MS SQL, Snowflake, Oracle, Teradata)

Qualifications:
- Bachelor’s degree in a technical area such as computer science, engineering, or management information science
- Recent Healthcare Life Sciences (pharma preferred) and/or commercial/marketing data experience is highly preferred
- Domain knowledge in the pharmaceutical industry preferred
- Good knowledge of data governance and data cataloging best practices
- Relevant experience or knowledge in areas such as database management, data quality, master data management, metadata management, performance tuning, collaboration, and business process management
- Strong business analysis acumen to meet or exceed business requirements following User-Centered Design (UCD)
- Strong experience with testing of BI and analytics applications: unit testing (e.g. phased or Agile sprints or MVP), system integration testing (SIT) and user acceptance testing (UAT)
- Experience with technical solution management tools such as JIRA or GitHub
- Stays abreast of customer, industry, and technology trends in enterprise Business Intelligence (BI) and visualization tools

Technical Skillset:
- 5+ years of hands-on experience in developing BI capabilities using MicroStrategy
- Proficiency in common BI tools, such as Tableau, Power BI, etc., is a plus
- Common data model (logical and physical) and conceptual data model validation to create a consumption layer for reporting (dimensional model, semantic layer, direct database aggregates or OLAP cubes)
- Development using a design system for reporting as well as ad-hoc analytics templates
- BI product scalability and performance tuning
- Platform administration and security, including BI platform tenant management (licensing, capacity, vendor access, vulnerability testing)
- Experience working with cloud-native SQL and NoSQL database platforms; Snowflake experience is desirable
- Experience with AWS services (EC2, EMR, RDS, Spark) is preferred
- Solid understanding of Scrum/Agile is preferred, along with working knowledge of CI/CD, GitHub, and MLflow
- Familiarity with data privacy standards, governance principles, data protection, and pharma industry practices/GDPR compliance is preferred
- Great communication skills
- Great business influencing and stakeholder management skills

Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates.

Information & Business Tech
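The KPI-validation duty described in this posting can be sketched as a simple reconciliation between source-of-truth aggregates and the figures a report displays; the datasets, column names, and tolerance below are assumptions for illustration, not Pfizer artifacts.

    import pandas as pd

    # Hypothetical extracts: row-level warehouse sales and the per-region
    # KPI figures a dashboard currently shows
    source = pd.read_csv("warehouse_sales.csv")
    report = pd.read_csv("report_kpi_extract.csv")

    # Recompute the KPI independently from the source data
    expected = source.groupby("region", as_index=False)["net_sales"].sum()

    # Join and flag any region where the dashboard diverges beyond a tolerance
    check = expected.merge(report, on="region", suffixes=("_expected", "_reported"))
    check["abs_diff"] = (check["net_sales_expected"] - check["net_sales_reported"]).abs()
    anomalies = check[check["abs_diff"] > 0.01]

    if anomalies.empty:
        print("all regional KPIs reconcile")
    else:
        print("anomalies found:")
        print(anomalies)

Checks along these lines are one way to back the SIT/UAT and data-quality responsibilities with something repeatable rather than manual spot checks.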
Posted 3 weeks ago
9.0 years
0 Lacs
Greater Chennai Area
On-site
The AI, Data, and Analytics (AIDA) organization, a Pfizer Digital organization, is responsible for the development and management of all data and analytics tools and platforms across the enterprise, from global product development, to manufacturing, to commercial, to point of patient care across 100+ countries. One of the team’s top priorities is the development of Business Intelligence (BI), Reporting, and Visualization products, which will serve as an enabler for the company’s digital transformation to bring innovative therapeutics to patients.

Role Summary
We are looking for a technically skilled and experienced Reporting Engineering Senior Manager who is passionate about developing BI and data visualization products for our Customer Facing and Sales Enablement colleagues, totaling over 20,000 individuals. This role involves working across multiple business segments globally to deliver best-in-class BI Reporting and Visualization capabilities that enable impactful business decisions and cohesive, high-engagement user experiences.

In this position, you will be accountable for a thorough understanding of data, business, and analytic requirements in order to deliver high-impact, relevant, interactive data visualization products that drive company performance by continuously monitoring, measuring, identifying root causes, and proactively identifying patterns and triggers across the company to optimize performance. This role will also drive best practices and standards for BI & Visualization, and will work closely with stakeholders to understand their needs and ensure that reporting assets are created with a focus on Customer Experience. It requires working with complex and advanced data environments, employing the right architecture to build scalable semantic layers and contemporary reporting visualizations. The Reporting Manager will ensure data quality and integrity by validating the accuracy of KPIs and insights, resolving anomalies, implementing data quality checks, and conducting system integration testing (SIT) and user acceptance testing (UAT). The ideal candidate is a passionate and results-oriented product lead with a proven track record of delivering data- and analytics-driven solutions for the pharmaceutical industry.

Role Responsibilities:
- Serve as an engineering expert in business intelligence and data visualization products in service of the field force and HQ enabling functions
- Act as a lead technical BI & Visualization developer on projects and collaborate with global team members (e.g. other engineers, regional delivery and activation teams, vendors) to architect, design and create BI & Visualization products at scale
- Own BI solution architecture design and implementation
- Develop a thorough understanding of data, business, and analytic requirements (incl. BI product blueprints such as SMART) to deliver high-impact, relevant data visualization products while respecting project or program budgets and timelines
- Deliver quality functional requirements and solution designs, adhering to established standards and best practices
- Follow Pfizer processes in Portfolio Management, Project Management, and the Product Management Playbook, following Agile, Hybrid or Enterprise Solution Life Cycle approaches
Extensive technical and implementation knowledge of multitude of BI and Visualization platforms not limiting to Tableau, MicroStrategy, Business Objects, MS-SSRS, and etc. Experience of cloud-based architectures, cloud analytics products / solutions, and data products / solutions (eg: AWS Redshift, MS SQL, Snowflake, Oracle, Teradata). Qualifications Bachelor’s degree in a technical area such as computer science, engineering, or management information science. 9+ years Relevant experience or knowledge in areas such as database management, data quality, master data management, metadata management, performance tuning, collaboration, and business process management. Recent Healthcare Life Sciences (pharma preferred) and/or commercial/marketing data experience is highly preferred. Domain knowledge in the pharmaceutical industry preferred. Good knowledge of data governance and data cataloging best practices. Strong Business Analysis acumen to meet or exceed business requirements following User Center Design (UCD). Strong Experience with testing of BI and Analytics applications – Unit Testing (e.g. Phased or Agile Sprints or MVP), System Integration Testing (SIT) and User Integration Testing (UAT). Experience with technical solution management tools such as JIRA or Github. Stay abreast of customer, industry, and technology trends with enterprise Business Intelligence (BI) and visualization tools. Technical Skillset 9+ years of hands-on experience in developing BI capabilities using Microstrategy Proficiency in industry common BI tools, such as Tableau, PowerBI, etc. is a plus. Common Data Model (Logical & Physical), Conceptual Data Model validation to create Consumption Layer for Reporting (Dimensional Model, Semantic Layer, Direct Database Aggregates or OLAP Cubes) Develop using Design System for Reporting as well as Adhoc Analytics Template BI Product Scalability, Performance-tuning Platform Admin and Security, BI Platform tenant (licensing, capacity, vendor access, vulnerability testing) Experience in working with cloud native SQL and NoSQL database platforms. Snowflake experience is desirable. Experience in AWS services EC2, EMR, RDS, Spark is preferred. Solid understanding of Scrum/Agile is preferred and working knowledge of CI/CD, GitHub MLflow. Familiarity with data privacy standards, governance principles, data protection, pharma industry practices/GDPR compliance is preferred. Great communication skills. Great business influencing and stakeholder management skills. Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates. Information & Business Tech Show more Show less
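To make the "consumption layer" skillset concrete: a direct database aggregate over a star schema is typically just a view the BI tool's semantic layer points at. The following is a minimal, hedged sketch only; the table, column, and category names are invented, and sqlite3 merely stands in for a warehouse such as Snowflake or Redshift.

```python
import sqlite3

# Hedged sketch of a "direct database aggregate" over a hypothetical star
# schema (fact_sales + dim_date + dim_product). The BI tool's semantic
# layer would query the view instead of the raw fact table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, year INT, month INT);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_sales  (date_key INT, product_key INT,
                          customer_id INT, net_amount REAL);

-- The aggregate the report reads: one row per year/month/category.
CREATE VIEW agg_monthly_sales AS
SELECT d.year, d.month, p.category,
       SUM(f.net_amount)             AS net_sales,
       COUNT(DISTINCT f.customer_id) AS buyers
FROM fact_sales f
JOIN dim_date    d ON d.date_key    = f.date_key
JOIN dim_product p ON p.product_key = f.product_key
GROUP BY d.year, d.month, p.category;
""")
conn.execute("INSERT INTO dim_date VALUES (20240101, 2024, 1)")
conn.execute("INSERT INTO dim_product VALUES (1, 'Oncology')")
conn.execute("INSERT INTO fact_sales VALUES (20240101, 1, 42, 150.0)")
print(conn.execute("SELECT * FROM agg_monthly_sales").fetchall())
# -> [(2024, 1, 'Oncology', 150.0, 1)]
```

Pre-aggregating this way is what keeps dashboard queries fast without an OLAP cube: the expensive join and GROUP BY run once in the warehouse rather than per report refresh.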
Posted 3 weeks ago
9.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
The AI, Data, and Analytics (AIDA) organization, a Pfizer Digital organization, is responsible for the development and management of all data and analytics tools and platforms across the enterprise – from global product development, to manufacturing, to commercial, to point of patient care in more than 100 countries. One of the team’s top priorities is the development of Business Intelligence (BI), Reporting, and Visualization products, which will serve as an enabler for the company’s digital transformation to bring innovative therapeutics to patients.

Role Summary
We are looking for a technically skilled and experienced Reporting Engineering Senior Manager who is passionate about developing BI and data visualization products for our Customer Facing and Sales Enablement colleagues, totaling over 20,000 individuals. This role works across multiple business segments globally to deliver best-in-class BI Reporting and Visualization capabilities that enable impactful business decisions and cohesive, high-engagement user experiences. In this position, you will be accountable for developing a thorough understanding of data, business, and analytic requirements to deliver high-impact, relevant, interactive data visualization products that drive company performance through continuous monitoring, measurement, root-cause identification, and proactive detection of patterns and triggers across the company. This role will also drive best practices and standards for BI & Visualization, working closely with stakeholders to understand their needs and ensure that reporting assets are created with a focus on customer experience. The role requires working with complex and advanced data environments, employing the right architecture to build scalable semantic layers and contemporary reporting visualizations. The Reporting Manager will ensure data quality and integrity by validating the accuracy of KPIs and insights, resolving anomalies, implementing data quality checks, and conducting system integration testing (SIT) and user acceptance testing (UAT). The ideal candidate is a passionate and results-oriented product lead with a proven track record of delivering data- and analytics-driven solutions for the pharmaceutical industry.

Role Responsibilities
- Serve as the engineering expert for business intelligence and data visualization products in service of field force and HQ enabling functions.
- Act as lead technical BI & Visualization developer on projects and collaborate with global team members (e.g., other engineers, regional delivery and activation teams, vendors) to architect, design, and create BI & Visualization products at scale.
- Own BI solution architecture design and implementation.
- Maintain a thorough understanding of data, business, and analytic requirements (incl. BI product blueprints such as SMART) to deliver high-impact, relevant data visualization products while respecting project or program budgets and timelines.
- Deliver quality functional requirements and solution designs, adhering to established standards and best practices.
- Follow Pfizer processes in Portfolio Management, Project Management, and the Product Management Playbook, following an Agile, Hybrid, or Enterprise Solution Life Cycle approach.

Qualifications
- Extensive technical and implementation knowledge of a multitude of BI and visualization platforms, including but not limited to Tableau, MicroStrategy, Business Objects, and MS SSRS.
- Experience with cloud-based architectures, cloud analytics products/solutions, and data products/solutions (e.g., AWS Redshift, MS SQL, Snowflake, Oracle, Teradata).
- Bachelor’s degree in a technical area such as computer science, engineering, or management information science.
- 9+ years of relevant experience or knowledge in areas such as database management, data quality, master data management, metadata management, performance tuning, collaboration, and business process management.
- Recent healthcare/life sciences (pharma preferred) and/or commercial/marketing data experience is highly preferred; domain knowledge in the pharmaceutical industry preferred.
- Good knowledge of data governance and data cataloging best practices.
- Strong business analysis acumen to meet or exceed business requirements following User-Centered Design (UCD).
- Strong experience with testing of BI and analytics applications: unit testing (e.g., phased or Agile sprints or MVP), System Integration Testing (SIT), and User Acceptance Testing (UAT).
- Experience with technical solution management tools such as JIRA or GitHub.
- Stays abreast of customer, industry, and technology trends in enterprise Business Intelligence (BI) and visualization tools.

Technical Skillset
- 9+ years of hands-on experience developing BI capabilities using MicroStrategy.
- Proficiency in industry-common BI tools such as Tableau and Power BI is a plus.
- Common Data Model (logical and physical) and conceptual data model validation to create a consumption layer for reporting (dimensional model, semantic layer, direct database aggregates, or OLAP cubes).
- Development using a design system for reporting as well as ad hoc analytics templates.
- BI product scalability and performance tuning; platform administration and security; BI platform tenant management (licensing, capacity, vendor access, vulnerability testing).
- Experience working with cloud-native SQL and NoSQL database platforms; Snowflake experience is desirable.
- Experience with AWS services (EC2, EMR, RDS) and Spark is preferred.
- Solid understanding of Scrum/Agile is preferred, plus working knowledge of CI/CD, GitHub, and MLflow.
- Familiarity with data privacy standards, governance principles, data protection, and pharma industry practices/GDPR compliance is preferred.
- Great communication, business influencing, and stakeholder management skills.

Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates. Information & Business Tech
Posted 3 weeks ago
6.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY-Consulting - Data and Analytics – Senior - IICS Developer

EY's Consulting Services is a unique, industry-focused business unit that provides a broad range of integrated services leveraging deep industry experience with strong functional and technical capabilities and product knowledge. EY’s financial services practice provides integrated consulting services to financial institutions and other capital markets participants, including commercial banks, retail banks, investment banks, broker-dealers and asset management firms, and insurance firms from leading Fortune 500 companies. Within EY’s Consulting Practice, the Data and Analytics team solves big, complex issues and capitalizes on opportunities to deliver better working outcomes that help expand and safeguard businesses, now and in the future. This way we help create a compelling business case for embedding the right analytical practice at the heart of clients’ decision-making.

The opportunity
A Senior Designer and Developer working with Informatica Intelligent Cloud Services (IICS), in roles involving multiple sources such as files and tables, typically has a broad set of responsibilities centered around designing, developing, and managing complex data integration workflows. The role spans multiple data sources, including databases, files, cloud storage, and APIs, to ensure seamless data movement and transformation for analytics and business intelligence purposes.

Key Roles and Responsibilities of an IICS Senior Designer and Developer

Designing and Developing Data Integration Solutions
- Develop and design ETL (Extract, Transform, Load) mappings and workflows using Informatica Cloud (IICS), integrating data from various sources such as files, multiple database tables, cloud storage, and APIs through ODBC and REST connectors.
- Configure synchronization tasks that may involve multiple database tables as sources, ensuring efficient data extraction and loading.
- Build reusable, parameterized mapping templates to handle different data loads, including full, incremental, and CDC (Change Data Capture) loads (the sketch after this section shows the underlying pattern).

Handling Multiple Data Sources
- Work with structured, semi-structured, and unstructured data sources including Oracle, SQL Server MI, Azure Data Lake, Azure Blob Storage, Salesforce Net Zero, Snowflake, and other cloud/on-premises platforms.
- Manage file ingestion tasks to load large datasets from on-premises systems to cloud data lakes or warehouses.
- Use various cloud connectors and transformations (e.g., Aggregator, Filter, Joiner, Lookup, Rank, Router) to process and transform data efficiently.

Data Quality, Governance, and Documentation
- Implement data quality and governance policies to ensure data accuracy, integrity, and security throughout the data integration lifecycle.
- Create and maintain detailed documentation such as source-to-target mappings, ETL design specifications, and data migration strategies.
- Develop audit frameworks to track data loads and support compliance requirements such as SOX.
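IICS expresses incremental loads through mapping tasks and parameters rather than code, but the watermark logic underneath is the same everywhere. Here is a minimal sketch of that pattern, with hypothetical table and column names (orders, stg_orders, last_modified) and sqlite3 standing in for the source and target systems; it assumes stg_orders has a unique key on order_id.

```python
import sqlite3

def incremental_load(src: sqlite3.Connection, tgt: sqlite3.Connection,
                     watermark: str) -> str:
    """Copy only rows changed since the last successful run.

    `watermark` is the ISO-8601 timestamp of the newest row seen last run.
    """
    rows = src.execute(
        "SELECT order_id, amount, last_modified FROM orders "
        "WHERE last_modified > ?", (watermark,)).fetchall()
    tgt.executemany(
        # Upsert keeps the load idempotent if a failed run is retried.
        "INSERT INTO stg_orders (order_id, amount, last_modified) "
        "VALUES (?, ?, ?) ON CONFLICT(order_id) DO UPDATE SET "
        "amount = excluded.amount, last_modified = excluded.last_modified",
        rows)
    tgt.commit()
    # The newest change timestamp becomes the next run's watermark, so the
    # stored watermark only advances after a successful commit.
    return max((r[2] for r in rows), default=watermark)
```

The same idea parameterized (table name, key column, timestamp column) is what makes the reusable mapping templates mentioned above possible.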
Project Planning and Coordination
- Plan and monitor ETL development projects; coordinate with cross-functional teams including system administrators, DBAs, data architects, and analysts to align on requirements and deliverables.
- Communicate effectively across organizational levels to report progress, troubleshoot issues, and coordinate deployments.

Performance Tuning and Troubleshooting
- Optimize ETL workflows and mappings for performance, including tuning SQL/PL-SQL queries and Informatica transformations.
- Troubleshoot issues using IICS frameworks and collaborate with Informatica support as needed.

Leadership and Mentoring (Senior Role Specific)
- Oversee design and development efforts, review the work of junior developers, and ensure adherence to best practices and standards.
- Lead the creation of ETL standards, naming conventions, and methodologies to promote consistency and reusability across projects.

Summary of Skills and Tools Commonly Used
- Informatica Intelligent Cloud Services (IICS), Informatica Cloud Data Integration (CDI); 6-9 years of experience expected.
- SQL MI, PL/SQL, API integrations (REST V2), ODBC connections, flat files, ADLS, Salesforce Net Zero.
- Cloud platforms: Azure Data Lake, Azure Synapse (SQL Data Warehouse), Snowflake, AWS Redshift.
- Data modelling and warehousing concepts, including OLAP and star and snowflake schemas.
- Data quality tools and scripting languages such as Python, R, or SAS for advanced analytics support.
- Project management and documentation tools; strong communication skills.

In essence, a Senior IICS Designer and Developer role is a blend of technical expertise in data integration across multiple heterogeneous sources (files, tables, APIs), project leadership, and ensuring high-quality, scalable data pipelines that support enterprise BI and analytics initiatives.

What We Look For
- People with commercial acumen, technical experience, and enthusiasm to learn new things in this fast-moving environment.
- An opportunity to be part of a market-leading, multi-disciplinary team of 1,400+ professionals, in the only integrated global transaction business worldwide.
- Opportunities to work with EY Consulting practices globally, with leading businesses across a range of industries.

What Working At EY Offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that’s right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 3 weeks ago
6.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY-Consulting - Data and Analytics – Senior - IICS Developer

EY's Consulting Services is a unique, industry-focused business unit that provides a broad range of integrated services leveraging deep industry experience with strong functional and technical capabilities and product knowledge. EY’s financial services practice provides integrated consulting services to financial institutions and other capital markets participants, including commercial banks, retail banks, investment banks, broker-dealers and asset management firms, and insurance firms from leading Fortune 500 companies. Within EY’s Consulting Practice, the Data and Analytics team solves big, complex issues and capitalizes on opportunities to deliver better working outcomes that help expand and safeguard businesses, now and in the future. This way we help create a compelling business case for embedding the right analytical practice at the heart of clients’ decision-making.

The opportunity
A Senior Designer and Developer working with Informatica Intelligent Cloud Services (IICS), in roles involving multiple sources such as files and tables, typically has a broad set of responsibilities centered around designing, developing, and managing complex data integration workflows. The role spans multiple data sources, including databases, files, cloud storage, and APIs, to ensure seamless data movement and transformation for analytics and business intelligence purposes.

Key Roles and Responsibilities of an IICS Senior Designer and Developer

Designing and Developing Data Integration Solutions
- Develop and design ETL (Extract, Transform, Load) mappings and workflows using Informatica Cloud (IICS), integrating data from various sources such as files, multiple database tables, cloud storage, and APIs through ODBC and REST connectors.
- Configure synchronization tasks that may involve multiple database tables as sources, ensuring efficient data extraction and loading.
- Build reusable, parameterized mapping templates to handle different data loads, including full, incremental, and CDC (Change Data Capture) loads.

Handling Multiple Data Sources
- Work with structured, semi-structured, and unstructured data sources including Oracle, SQL Server MI, Azure Data Lake, Azure Blob Storage, Salesforce Net Zero, Snowflake, and other cloud/on-premises platforms.
- Manage file ingestion tasks to load large datasets from on-premises systems to cloud data lakes or warehouses.
- Use various cloud connectors and transformations (e.g., Aggregator, Filter, Joiner, Lookup, Rank, Router) to process and transform data efficiently.

Data Quality, Governance, and Documentation
- Implement data quality and governance policies to ensure data accuracy, integrity, and security throughout the data integration lifecycle.
- Create and maintain detailed documentation such as source-to-target mappings, ETL design specifications, and data migration strategies.
- Develop audit frameworks to track data loads and support compliance requirements such as SOX.
Project Planning and Coordination
- Plan and monitor ETL development projects; coordinate with cross-functional teams including system administrators, DBAs, data architects, and analysts to align on requirements and deliverables.
- Communicate effectively across organizational levels to report progress, troubleshoot issues, and coordinate deployments.

Performance Tuning and Troubleshooting
- Optimize ETL workflows and mappings for performance, including tuning SQL/PL-SQL queries and Informatica transformations.
- Troubleshoot issues using IICS frameworks and collaborate with Informatica support as needed.

Leadership and Mentoring (Senior Role Specific)
- Oversee design and development efforts, review the work of junior developers, and ensure adherence to best practices and standards.
- Lead the creation of ETL standards, naming conventions, and methodologies to promote consistency and reusability across projects.

Summary of Skills and Tools Commonly Used
- Informatica Intelligent Cloud Services (IICS), Informatica Cloud Data Integration (CDI); 6-9 years of experience expected.
- SQL MI, PL/SQL, API integrations (REST V2), ODBC connections, flat files, ADLS, Salesforce Net Zero.
- Cloud platforms: Azure Data Lake, Azure Synapse (SQL Data Warehouse), Snowflake, AWS Redshift.
- Data modelling and warehousing concepts, including OLAP and star and snowflake schemas.
- Data quality tools and scripting languages such as Python, R, or SAS for advanced analytics support.
- Project management and documentation tools; strong communication skills.

In essence, a Senior IICS Designer and Developer role is a blend of technical expertise in data integration across multiple heterogeneous sources (files, tables, APIs), project leadership, and ensuring high-quality, scalable data pipelines that support enterprise BI and analytics initiatives.

What We Look For
- People with commercial acumen, technical experience, and enthusiasm to learn new things in this fast-moving environment.
- An opportunity to be part of a market-leading, multi-disciplinary team of 1,400+ professionals, in the only integrated global transaction business worldwide.
- Opportunities to work with EY Consulting practices globally, with leading businesses across a range of industries.

What Working At EY Offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that’s right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 3 weeks ago
6.0 years
0 Lacs
Itanagar, Arunachal Pradesh, India
On-site
Company Description
Wiser Solutions is a suite of in-store and eCommerce intelligence and execution tools. We're on a mission to enable brands, retailers, and retail channel partners to gather intelligence and automate actions to optimize in-store and online pricing, marketing, and operations initiatives. Our Commerce Execution Suite is available globally.

Job Description
We are looking for a highly capable Senior Full Stack Engineer to be a core contributor in developing our suite of product offerings. If you love working on complex problems and writing clean code, you will love this role. Our goal is to solve a messy problem elegantly and cost-effectively. Our job is to collect, categorize, and analyze semi-structured data from different sources (20 million+ products from 500+ websites feed our catalog of 500 million+ products). We help our customers discover new patterns in their data that can be leveraged to make them more competitive and increase their revenue.

Essential Functions
- Think like our customers: work with product and engineering leaders to define intuitive solutions.
- Design customer-facing UI and back-end services for various business processes.
- Develop high-performance applications by writing testable, reusable, and efficient code.
- Implement effective security protocols, data protection measures, and storage solutions.
- Improve the quality of our solutions: hold yourself and your team members accountable for writing high-quality, well-designed, maintainable software.
- Own your work: take responsibility for shepherding your projects from idea through delivery into production.
- Bring new ideas to the table: some of our best innovations originate within the team.
- Guide and mentor others on the team.

Technologies We Use
- Languages: NodeJS/NestJS/TypeScript, SQL, React/Redux, GraphQL
- Infrastructure: AWS, Docker, Kubernetes, Terraform, GitHub Actions, ArgoCD
- Databases: Postgres, MongoDB, Redis, Elasticsearch, Trino, Iceberg
- Streaming and queuing: Kafka, NATS, Keda

Qualifications
- 6+ years of professional software engineering/development experience.
- Proficiency with architecting and delivering solutions within a distributed software platform.
- Full-stack engineering experience, including front-end frameworks (React/TypeScript, Redux) and back-end technologies such as NodeJS/NestJS/TypeScript and GraphQL.
- Proven ability to learn quickly, make pragmatic decisions, and adapt to changing business needs.
- Proven ability to work effectively, prioritizing and organizing your work in a highly dynamic environment.
- Proven track record of working in highly distributed, event-driven systems.
- Strong proficiency with RDBMS/NoSQL/big data solutions (Postgres, MongoDB, Trino, etc.).
- Solid understanding of data pipelines and workflow automation: orchestration tools, scheduling, and monitoring.
- Solid understanding of ETL/ELT and OLTP/OLAP concepts.
- Solid understanding of data lakes, data warehouses, and modeling practices (Data Vault, etc.), and experience leveraging data lake solutions (e.g., AWS Glue, dbt, Trino, Iceberg).
- Ability to clean, transform, and aggregate data using SQL or scripting languages (see the sketch at the end of this listing).
- Ability to design and estimate tasks and coordinate work with other team members during iteration planning.
- Solid understanding of AWS, Linux, and infrastructure concepts.
- Track record of lifting and challenging teammates to higher levels of achievement.
- Experience measuring, driving, and improving the software engineering process.
- Good testing habits and a strong eye for quality.
- Outstanding organizational, communication, and relationship-building skills conducive to driving consensus; able to work well in a cross-functional environment.
- Experience working in an agile team environment.
- Ownership: feel a sense of personal accountability and responsibility to drive execution from start to finish, and drive adoption of Wiser's Product Delivery organization principles across the department.

Bonus Points
- Experience with CQRS
- Experience with Domain-Driven Design
- Experience with C4 modeling
- Experience working within a retail or ecommerce environment
- Experience with AI coding agents (Windsurf, Cursor, Claude, ChatGPT, etc.) and prompt engineering

Why Join Wiser Solutions?
- Work on an industry-leading product trusted by top retailers and brands.
- Be at the forefront of pricing intelligence and data-driven decision-making.
- A collaborative, fast-paced environment where your impact is tangible.
- Competitive compensation, benefits, and career growth opportunities.

Additional Information
EEO STATEMENT Wiser Solutions, Inc. is an Equal Opportunity Employer and prohibits discrimination, harassment, and retaliation of any kind. Wiser Solutions, Inc. is committed to the principle of equal employment opportunity for all employees and applicants, providing a work environment free of discrimination, harassment, and retaliation. All employment decisions at Wiser Solutions, Inc. are based on business needs, job requirements, and individual qualifications, without regard to race, color, religion, sex, national origin, family or parental status, disability, genetics, age, sexual orientation, veteran status, or any other status protected by state, federal, or local law. Wiser Solutions, Inc. will not tolerate discrimination, harassment, or retaliation based on any of these characteristics.
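Returning to the "clean, transform, and aggregate" qualification above: scraped catalog data tends to arrive with inconsistent price formats and category labels. This toy sketch (field names and values are invented, not Wiser's schema) shows the shape of that work in plain Python.

```python
from collections import defaultdict

# Toy records standing in for semi-structured scraped product data.
raw = [
    {"sku": "A1", "category": "Audio", "price": "$19.99"},
    {"sku": "A2", "category": "audio ", "price": "24.50 USD"},
    {"sku": "B1", "category": "Video", "price": None},  # dropped below
]

def clean_price(value):
    """Strip currency symbols/codes; return float, or None if unparseable."""
    if not value:
        return None
    digits = "".join(ch for ch in value if ch.isdigit() or ch == ".")
    return float(digits) if digits else None

totals, counts = defaultdict(float), defaultdict(int)
for rec in raw:
    price = clean_price(rec["price"])
    if price is None:
        continue  # quality rule: exclude rows with no usable price
    cat = rec["category"].strip().title()  # normalize category labels
    totals[cat] += price
    counts[cat] += 1

avg_price = {cat: totals[cat] / counts[cat] for cat in totals}
print(avg_price)  # {'Audio': 22.245}
```

At catalog scale the same normalize-then-aggregate step would run in SQL on Trino or in a batch job, but the cleaning rules are decided exactly as here.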
Posted 3 weeks ago
4.0 - 8.0 years
6 - 12 Lacs
Hyderabad, Kakinada
Work from Office
Job Description for SQL Server Developer:
We are looking for a Senior MS SQL developer who will be responsible for designing databases and ensuring their stability, reliability, and performance. You will also work with other developers to optimize in-application SQL statements as necessary and establish best practices. You will help solve database usage issues and offer ideas and advice that can help avoid such problems in the future.

Roles and Responsibilities
- Design, develop, and maintain complex SQL queries, stored procedures, and functions (see the sketch below).
- Perform database optimization and performance tuning to ensure optimal system efficiency.
- Apply data modelling techniques to ensure development and implementation support efforts meet integration and performance expectations.
- Independently analyze, solve, and correct issues in real time, providing end-to-end problem resolution.
- Refine and automate regular processes, track issues, and document changes.
- Assist developers with complex query tuning and schema refinement; collaborate with the development team to integrate database components into applications.
- Provide 24x7 support for critical production systems.
- Create and maintain documentation for database processes and procedures.

Education
- Bachelor's degree in computer science, information systems, or a related field.

Required Skills and Experience
- 5+ years of MS SQL Server experience.
- Experience with performance tuning and optimization (PTO) using native monitoring and troubleshooting tools.
- Experience working with Windows Server, including Active Directory.
- Familiarity with Azure is a plus.
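As a hedged illustration of the stored-procedure and tuning work described above (all object names are hypothetical, and the DSN is a placeholder), a parameterized procedure is usually paired with a covering index so the plan seeks on the filter columns and avoids key lookups:

```python
import pyodbc  # assumes a SQL Server ODBC driver is installed

CREATE_PROC = """
CREATE OR ALTER PROCEDURE dbo.GetOrdersByCustomer
    @CustomerId INT,
    @From DATE
AS
BEGIN
    SET NOCOUNT ON;
    SELECT OrderId, OrderDate, TotalDue
    FROM   dbo.Orders
    WHERE  CustomerId = @CustomerId
      AND  OrderDate >= @From;
END
"""
# Covering index: seek on the WHERE columns, INCLUDE the selected column,
# so the plan never has to look back into the clustered index.
CREATE_INDEX = """
CREATE INDEX IX_Orders_Customer_Date
ON dbo.Orders (CustomerId, OrderDate)
INCLUDE (TotalDue);
"""

conn = pyodbc.connect("DSN=warehouse")  # placeholder connection string
cur = conn.cursor()
cur.execute(CREATE_PROC)
cur.execute(CREATE_INDEX)
conn.commit()
cur.execute("EXEC dbo.GetOrdersByCustomer @CustomerId=?, @From=?",
            42, "2024-01-01")
print(cur.fetchall())
```

The parameterized procedure also gives the optimizer a reusable plan, which is half of what "in-application SQL optimization" usually amounts to.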
Posted 3 weeks ago
5.0 - 8.0 years
27 - 42 Lacs
Bengaluru
Work from Office
Job Summary
NetApp is a cloud-led, data-centric software company that helps organizations put data to work in applications that elevate their business. We help organizations unlock the best of cloud technology. As a member of Solutions Integration Engineering, you will work cross-functionally to define and create engineered solutions/products that accelerate field adoption. We work closely with ISVs and with the startup ecosystem in the virtualization, cloud, and AI/ML domains to build solutions that matter for customers. You will work closely with the product owner and product lead on the company's current and future strategies related to these domains.

Job Requirements
- Deliver features, including participating in the full software development lifecycle.
- Deliver reliable, innovative solutions and products.
- Participate in product design, development, verification, troubleshooting, and delivery of a system or major subsystems, including authoring project specifications.
- Work closely with cross-functional teams, including business stakeholders, to innovate and unlock new use cases for our customers.
- Write unit and automated integration tests and project documentation.

Technical Skills
- Understanding of the software development lifecycle.
- Proficiency in full-stack development: Python, the container ecosystem, cloud, and modern ML frameworks.
- Knowledge of data storage and artificial intelligence concepts, including server/storage architecture, batch/stream processing, data warehousing, data lakes, distributed filesystems, OLTP/OLAP databases, and data pipelining tools, as well as model inferencing and RAG workflows.
- Exposure to data pipelines, integrations, and Unix-based operating system kernels and development environments, e.g., Linux or FreeBSD.
- A strong understanding of basic to complex concepts related to computer architecture, data structures, and new programming paradigms.
- Demonstrated creative and systematic approach to problem solving.
- Excellent written and verbal communication skills.

Education
- Minimum 5 years of experience; must be hands-on with coding.
- B.E/B.Tech or M.S in Computer Science or a related technical field.
Posted 3 weeks ago
8.0 - 12.0 years
30 - 35 Lacs
Bengaluru
Work from Office
Good-to-have skills: cloud, SQL, and data analysis.
Location: Pune - Kharadi - WFO - 3 days/week.

Job Description:
We are seeking a highly skilled and experienced Python Lead to join our team. The ideal candidate will have strong expertise in Python coding and development, along with good-to-have skills in cloud technologies, SQL, and data analysis.

Key Responsibilities:
- Lead the development of high-quality, scalable, and robust Python applications.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Ensure the performance, quality, and responsiveness of applications.
- Develop RESTful applications using frameworks like Flask, Django, or FastAPI.
- Utilize Databricks, PySpark SQL, and strong data analysis skills to drive data solutions (see the sketch below).
- Implement and manage modern data solutions using Azure Data Factory, Data Lake, and Databricks.

Mandatory Skills:
- Proven experience with cloud platforms (e.g., AWS).
- Strong proficiency in Python, PySpark, and R, and familiarity with additional programming languages such as C++, Rust, or Java.
- Expertise in designing ETL architectures for batch and streaming processes, database technologies (OLTP/OLAP), and SQL.
- Experience with Apache Spark and multi-cloud platforms (AWS, GCP, Azure).
- Knowledge of data governance and GxP data contexts; familiarity with the pharma value chain is a plus.

Good-to-Have Skills:
- Experience with modern data solutions via Azure.
- Knowledge of the principles summarized in the Microsoft Cloud Adoption Framework.
- Additional expertise in SQL and data analysis.

Educational Qualifications: Bachelor's/Master's degree or equivalent with a focus on software engineering.

If you are a passionate Python developer with a knack for cloud technologies and data analysis, we would love to hear from you. Join us in driving innovation and building cutting-edge solutions!
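A minimal sketch of the batch-ETL work this role describes, assuming a Spark runtime such as Databricks; the paths and column names are placeholders, not a real pipeline.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-etl").getOrCreate()

# Extract: raw, semi-structured events landed by an upstream process.
events = spark.read.json("s3://raw-bucket/events/2024-06-01/")

# Transform: type the timestamp, gate on quality, aggregate to daily grain.
daily = (events
         .withColumn("event_date", F.to_date("event_ts"))
         .filter(F.col("status") == "ok")            # basic quality gate
         .groupBy("event_date", "country")
         .agg(F.count("*").alias("events"),
              F.countDistinct("user_id").alias("users")))

# Load: partitioning by date keeps incremental reloads cheap and lets
# OLAP-style queries prune to the dates they need.
(daily.write.mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://curated-bucket/daily_events/"))
```

The same transform expressed in PySpark SQL (`spark.sql(...)`) is equivalent; which form the team standardizes on is a style decision, not a capability one.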
Posted 3 weeks ago
5.0 - 10.0 years
2 - 6 Lacs
Hyderabad
Work from Office
Key Requirements
Required Skills:
- Minimum 5+ years of experience with UI validation and automation tools such as Selenium and Playwright (see the sketch below).
- Experience in functional testing.
- Expert knowledge of SQL, including writing complex SQL queries and stored procedures.
- Experience with, and understanding of, all three sides of data: engineering, analytics, and visualization.
- Relevant work experience analyzing data from various formats and sources.
- Knowledge of data warehousing, OLAP, and multi-dimensional, star, and snowflake schemas.
- Understanding of ETL methodologies and data warehousing principles, approaches, technologies, and architectures, including the concepts, designs, and usage of data warehouses and data marts.
- Experience understanding source data from various platforms and mapping it into entity-relationship (ER) models for data integration and reporting.
- Experience with automated testing and coverage tools.
- Experience with CI/CD automation tools (desirable).
- Good to have: functional automation knowledge using Selenium (Cucumber, JUnit).
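UI validation in a data product usually means checking that what a dashboard renders matches what the warehouse holds. A hedged sketch of that pattern with Playwright follows; the URL, CSS selector, database file, and table name are all placeholders, and the warehouse table is assumed to exist.

```python
import sqlite3
from playwright.sync_api import sync_playwright

# Warehouse side: the count the dashboard KPI should reflect.
expected = sqlite3.connect("warehouse.db").execute(
    "SELECT COUNT(*) FROM fact_orders WHERE order_date = '2024-06-01'"
).fetchone()[0]

# UI side: read the rendered KPI from the page.
with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://dashboard.example.com/daily-orders")
    shown = int(page.locator("#order-count").inner_text().replace(",", ""))
    browser.close()

assert shown == expected, f"UI shows {shown}, warehouse has {expected}"
```

Wrapped in a pytest test and run from CI, this is the bridge between the "automation tools" and "complex SQL" requirements above.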
Posted 3 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Purpose
We are looking for a Senior SQL Developer to join our growing team of BI & analytics experts. The hire will be responsible for expanding and optimizing our data and data queries, as well as optimizing data flow and collection for consumption by our BI & Analytics platform. The ideal candidate is an experienced query builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The SQL Developer will support our software developers, database architects, data analysts, and data scientists on data and product initiatives, and will ensure optimal data delivery architecture is consistent throughout ongoing projects. The hire must be self-directed and comfortable supporting the data needs of multiple systems and products. The right candidate will be excited by the prospect of optimizing our company’s data architecture to support our next generation of products and data initiatives.

Job Responsibilities (Essential Functions)
- Create and maintain optimal SQL queries, views, tables, and stored procedures.
- Work together with various business units (BI, Product, Reporting) to develop the data warehouse platform vision, strategy, and roadmap.
- Understand the development of physical and logical data models.
- Ensure high-performance access to diverse data sources.
- Encourage the adoption of the organization’s frameworks by providing documentation, sample code, and developer support.
- Communicate progress on the adoption and effectiveness of the developed frameworks to department heads and managers.

Required Education and Experience
- Bachelor’s or Master’s degree, or an equivalent combination of education and experience in a relevant field.
- Understanding of T-SQL, data warehouses, star schemas, data modeling, OLAP, SQL, and ETL.
- Experience creating tables, views, and stored procedures.
- Understanding of several BI and reporting platforms, and awareness of industry trends and direction in BI/reporting and their applicability to the organization’s product strategies.
- Skilled in multiple database platforms, including SQL Server and MySQL.
- Knowledge of source control and project management tools such as Azure DevOps, Git, and JIRA.
- Familiarity with SonarQube for clean T-SQL coding practices.
- Familiarity with DevOps best practices and automation of documentation, testing, build, deployment, configuration, and monitoring.
- Communication skills: it is vital that applicants have exceptional written and spoken communication skills, with active listening abilities, to contribute to strategic decisions and advise senior management on specialized technical issues that will have an impact on the business.
- Strong team-building skills: the ability to provide direction for complex projects, mentor junior team members, and communicate the organization’s preferred technologies and frameworks across development teams.
- Experience: at least 5+ years working in a data warehousing position within a fast-paced and complex business environment as a SQL Developer, including experience developing schema data models in a data warehouse environment, full implementation of the system development lifecycle (SDLC), and proven, successful experience with the concepts of data integration, consolidation, enrichment, and aggregation.
- A strong demonstrated understanding of dimensional modeling and similar data warehousing techniques, as well as experience working with relational or multi-dimensional databases and business intelligence architectures (see the T-SQL sketch below).
- Analytical skills: passion and skill in research and analytics, along with a passion for data management tools and technologies; the ability to perform detailed data analysis, for example determining the content, structure, and quality of data through examination of data samples and source systems; and the ability to troubleshoot data warehousing issues and quickly resolve them.

Expected Competencies
- Detail-oriented with strong organizational skills.
- Attention to programming style and neatness.
- Strong English communication skills, both written and verbal.
- Ability to train and mentor junior colleagues with patience and tangible results.

Work Timings
This is a full-time position. Days and hours of work are Monday through Friday, with flexibility to support different time zones between 12 PM and 9 PM IST; the work schedule may include evening hours or weekends due to client needs, per manager instructions. This role works in hybrid mode and requires at least 2 days per week in the Hyderabad office. Occasional evening and weekend work may be expected in case of job-related emergencies or client needs.

EEO Statement
Cendyn provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability or genetics. In addition to federal law requirements, Cendyn complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training. Cendyn expressly prohibits any form of workplace harassment based on race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status. Improper interference with the ability of Cendyn’s employees to perform their job duties may result in discipline up to and including discharge.

Other Duties
Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.
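On the dimensional-modeling point referenced above: in a T-SQL warehouse this work often reduces to patterns such as the Type 2 slowly changing dimension. The following is a hedged sketch only, with hypothetical table and column names (not Cendyn's schema); in practice it would live inside a stored procedure.

```python
# Core of a Type 2 SCD load in T-SQL, held here as a Python constant so it
# can be executed via any SQL Server driver (e.g., pyodbc). Assumes the
# dimension carries valid_from/valid_to/is_current housekeeping columns.
SCD2_LOAD = """
-- Step 1: close out current rows whose tracked attribute changed.
UPDATE d
SET    d.valid_to = SYSUTCDATETIME(), d.is_current = 0
FROM   dbo.dim_customer d
JOIN   stg.customer s ON s.customer_id = d.customer_id
WHERE  d.is_current = 1 AND s.city <> d.city;

-- Step 2: insert a new current version for each changed or new customer.
-- After step 1, changed customers have no current row, so the anti-join
-- picks up both brand-new and just-closed customers.
INSERT INTO dbo.dim_customer
       (customer_id, city, valid_from, valid_to, is_current)
SELECT s.customer_id, s.city, SYSUTCDATETIME(), NULL, 1
FROM   stg.customer s
LEFT JOIN dbo.dim_customer d
       ON d.customer_id = s.customer_id AND d.is_current = 1
WHERE  d.customer_id IS NULL;
"""
```

Keeping history this way is what lets facts join to the dimension "as of" their transaction date rather than overwriting the past.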
Posted 3 weeks ago
0.0 - 5.0 years
10 - 20 Lacs
Bengaluru
Work from Office
Over the past 20 years Amazon has earned the trust of over 300 million customers worldwide by providing unprecedented convenience, selection, and value on Amazon.com. By deploying Amazon Pay's products and services, merchants make it easy for these millions of customers to safely purchase from their third-party sites using the information already stored in their Amazon account. In this role, you will lead data engineering efforts to drive automation for the Amazon Pay organization. You will be part of the data engineering team that will envision, build, and deliver high-performance, fault-tolerant data pipelines. As a Data Engineer, you will work with cross-functional partners from Science, Product, SDEs, Operations, and leadership to translate raw data into actionable insights for stakeholders, empowering them to make data-driven decisions.

Responsibilities
- Design, implement, and support a platform providing ad-hoc access to large data sets.
- Interface with other technology teams to extract, transform, and load data from a wide variety of data sources.
- Implement data structures using best practices in data modeling, ETL/ELT processes, and SQL, Redshift, and OLAP technologies (see the ELT sketch below).
- Model data and metadata for ad-hoc and pre-built reporting.
- Interface with business customers, gathering requirements and delivering complete reporting solutions.
- Build robust and scalable data integration (ETL) pipelines using SQL, Python, and Spark.
- Build and deliver high-quality data sets to support business analysts, data scientists, and customer reporting needs.
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.

Basic Qualifications
- 1+ years of data engineering experience
- Experience with SQL
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, or DataStage
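The ELT variant mentioned above means landing raw data first and transforming inside the warehouse. A hedged sketch of that pattern for Redshift follows; the bucket, IAM role ARN, and table names are placeholders, and a real pipeline would run these statements from an orchestrator rather than inline.

```python
# Step 1 (Load): Redshift COPY pulls raw files from S3 into a staging table.
LOAD_RAW = """
COPY staging.payments
FROM 's3://example-bucket/payments/2024/06/01/'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-load'
FORMAT AS PARQUET;
"""

# Step 2 (Transform): the aggregation runs where the data lives, so the
# Python layer only orchestrates and never moves rows itself.
TRANSFORM = """
INSERT INTO analytics.daily_payments
SELECT payment_date,
       merchant_id,
       COUNT(*)    AS txn_count,
       SUM(amount) AS gross_amount
FROM   staging.payments
GROUP  BY payment_date, merchant_id;
"""
```

Compared with classic ETL, pushing the transform into Redshift trades cluster compute for pipeline simplicity, which is usually the right call once the raw data already lands in S3.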
Posted 3 weeks ago
2.0 - 4.0 years
10 - 20 Lacs
Pune
Work from Office
What's the role all about?
As a BI Developer, you'll be a key contributor to developing reports in a multi-region, multi-tenant SaaS product. You'll collaborate with the core R&D team to build high-performance reports that serve the use cases of several applications in the suite.

How will you make an impact?
- Take ownership of the software development lifecycle, including design, development, unit testing, and deployment, working closely with QA teams.
- Ensure that architectural concepts are consistently implemented across the product.
- Act as a product expert within R&D, understanding the product's requirements and its market positioning.
- Work closely with cross-functional teams (Product Managers, Sales, Customer Support, and Services) to ensure successful product delivery.
- Design and build reports for given requirements; create design documents and test cases for the reports.
- Develop SQL to address ad-hoc report requirements and conduct analyses.
- Create visualizations and reports as per the requirements.
- Execute unit, functional, and performance testing and document the results.
- Conduct peer reviews and ensure quality is met at all stages.

Have you got what it takes?
- Bachelor's/Master's degree in Computer Science, Electronic Engineering, or equivalent from a reputed institute.
- 2-4 years of BI report development experience.
- Expertise in SQL and any cloud-based database; able to write SQL against any database for any business need.
- Experience with BI tools such as Tableau, Power BI, or MicroStrategy.
- Experience working with enterprise data warehouse/data lake systems.
- Strong knowledge of analytical databases and schemas.
- Development experience building solutions that leverage SQL and NoSQL databases; experience or knowledge of Snowflake is an advantage.
- In-depth understanding of database management systems, online analytical processing (OLAP), and ETL (extract, transform, load) frameworks.
- Experience with functional testing, performance testing, etc.
- Experience with public cloud infrastructure and technologies such as AWS, Azure, or GCP.
- Experience with continuous integration and delivery practices using industry-standard tools such as Jenkins.
- Experience working in an Agile development environment and using work-item management tools like JIRA.

What's in it for you? Enjoy NICE-FLEX!
Reporting into: Tech Manager
Role Type: Individual Contributor
Posted 3 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Bangalore / Chennai

- Hands-on data modelling for OLTP and OLAP systems.
- In-depth knowledge of conceptual, logical, and physical data modelling.
- Strong understanding of indexing, partitioning, and data sharding, with practical experience of having done the same (see the BigQuery sketch below).
- Strong understanding of the variables impacting database performance for near-real-time reporting and application interaction.
- Working experience with at least one data modelling tool, preferably DBSchema or Erwin.
- Good understanding of GCP databases such as AlloyDB, Cloud SQL, and BigQuery.
- Functional knowledge of the mutual fund industry is a plus.

Role & Responsibilities
- Work with business users and other stakeholders to understand business processes.
- Design and implement dimension and fact tables.
- Identify and implement data transformation/cleansing requirements.
- Develop highly scalable, reliable, high-performance data processing pipelines to extract, transform, and load data from various systems to the enterprise data warehouse.
- Develop conceptual, logical, and physical data models with associated metadata, including data lineage and technical data definitions.
- Design, develop, and maintain ETL workflows and mappings using the appropriate data load techniques.
- Provide research, high-level design, and estimates for data transformation and data integration from source applications to end-user BI solutions.
- Provide production support of ETL processes to ensure timely completion and availability of data in the data warehouse for reporting use.
- Analyze and resolve problems and provide technical assistance as necessary.
- Partner with the BI team to evaluate, design, and develop BI reports and dashboards according to functional specifications while maintaining data integrity and data quality.
- Work collaboratively with key stakeholders to translate business information needs into well-defined data requirements to implement BI solutions.
- Leverage transactional information and data from ERP, CRM, and HRIS applications to model, extract, and transform into reporting and analytics.
- Define and document the use of BI through user experience/use cases and prototypes; test and deploy BI solutions.
- Develop and support data governance processes; analyze data to identify and articulate trends, patterns, outliers, and quality issues; continuously validate reports and dashboards and suggest improvements.
- Train business end users, IT analysts, and developers.

Required Skills
- Bachelor's degree in Computer Science or a similar field, or equivalent work experience.
- 5+ years of experience on data warehousing, data engineering, or data integration projects.
- Expert in data warehousing concepts, strategies, and tools.
- Strong SQL background.
- Strong knowledge of relational databases such as SQL Server, PostgreSQL, and MySQL.
- Strong experience with GCP and Google BigQuery, Cloud SQL, Composer (Airflow), Dataflow, Dataproc, Cloud Functions, and GCS.
- Good to have: knowledge of SQL Server Reporting Services (SSRS) and SQL Server Integration Services (SSIS).
- Knowledge of AWS and Azure cloud is a plus.
- Experience with Informatica PowerExchange for mainframe, Salesforce, and other new-age data sources.
- Experience in integration using APIs, XML, JSON, etc.

Skills: Data modeling, OLAP, OLTP, BigQuery, and Google Cloud Platform (GCP)
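To ground the partitioning point above: in BigQuery, partitioning and clustering are declared in the table DDL. The sketch below uses an invented mutual-fund fact table (dataset and column names are hypothetical) and assumes application-default credentials for the client.

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

DDL = """
CREATE TABLE IF NOT EXISTS funds_dw.fact_transactions (
  txn_id      STRING,
  fund_id     STRING,
  investor_id STRING,
  txn_date    DATE,
  units       NUMERIC,
  amount      NUMERIC
)
PARTITION BY txn_date               -- prunes scans for date-bounded reports
CLUSTER BY fund_id, investor_id;    -- co-locates rows queried together
"""

client = bigquery.Client()   # uses application-default credentials
client.query(DDL).result()   # .result() blocks until the DDL completes
```

A daily NAV or transaction report filtered to a date range then scans only the matching partitions, which is where most of the near-real-time reporting cost goes.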
Posted 3 weeks ago
With the increasing demand for data analysis and business intelligence, OLAP (Online Analytical Processing) jobs have become popular in India. OLAP professionals are responsible for designing, building, and maintaining OLAP databases to support data analysis and reporting activities for organizations. If you are looking to pursue a career in OLAP in India, here is a comprehensive guide to help you navigate the job market.
The top hiring locations are metro cities known for a high concentration of IT companies and organizations that require OLAP professionals.
The average salary range for OLAP professionals in India varies based on experience levels. Entry-level professionals can expect to earn around INR 4-6 lakhs per annum, while experienced professionals with 5+ years of experience can earn upwards of INR 12 lakhs per annum.
Career progression in OLAP typically follows a trajectory from Junior Developer to Senior Developer, and then to a Tech Lead role. As professionals gain experience and expertise in OLAP technologies, they may also explore roles such as Data Analyst, Business Intelligence Developer, or Database Administrator.
In addition to OLAP expertise, professionals in this field are often expected to have knowledge of SQL, data modeling, ETL (Extract, Transform, Load) processes, data warehousing concepts, and data visualization tools such as Tableau or Power BI.
As you prepare for OLAP job interviews in India, make sure to hone your technical skills, brush up on industry trends, and showcase your problem-solving abilities. With the right preparation and confidence, you can successfully land a rewarding career in OLAP in India. Good luck!