3 - 5 years
11 - 15 Lacs
Hyderabad
Work from Office
Overview As Senior Analyst, Data Modeling, your focus will be to partner with D&A Data Foundation team members to create data models for global projects. This includes independently analyzing project data needs, identifying data storage and integration needs/issues, and driving opportunities for data model reuse while satisfying project requirements. The role will advocate Enterprise Architecture, Data Design, and D&A standards and best practices. You will perform all aspects of data modeling, working closely with the Data Governance, Data Engineering, and Data Architecture teams. As a member of the data modeling team, you will create data models for very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. The primary responsibilities of this role are to work with data product owners, data management owners, and data engineering teams to create physical and logical data models with an extensible philosophy to support future, unknown use cases with minimal rework. You'll be working in a hybrid environment with in-house, on-premise data sources as well as cloud and remote systems. You will establish data design patterns that will drive flexible, scalable, and efficient data models to maximize value and reuse. Responsibilities Complete conceptual, logical, and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse, or other cloud data warehousing technologies. Govern data design/modeling documentation of metadata (business definitions of entities and attributes) and construction of database objects, for baseline and investment-funded projects, as assigned. Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting. Support assigned project contractors (both on- and off-shore), orienting new contractors to standards, best practices, and tools. Contribute to project cost estimates, working with senior members of the team to evaluate the size and complexity of the changes or new development. Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework. Develop a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse. Partner with IT, data engineering, and other teams to ensure the enterprise data model incorporates the key dimensions needed for proper management: business and financial policies, security, local-market regulatory rules, and consumer privacy by design principles (PII management), all linked across fundamental identity foundations. Drive collaborative reviews of design, code, data, and security feature implementation performed by data engineers to drive data product development. Assist with data planning, sourcing, collection, profiling, and transformation. Create source-to-target mappings for ETL and BI developers. Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical and data lakes; data str/cleansing. Partner with the Data Governance team to standardize their classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders.
Support data lineage and mapping of source system data to canonical data stores for research, analysis, and productization. Qualifications 8+ years of overall technology experience, including at least 4+ years of data modeling and systems architecture. 3+ years of experience with data lake infrastructure, data warehousing, and data analytics tools. 4+ years of experience developing enterprise data models. Experience building solutions in the retail or supply chain space. Expertise in data modeling tools (ER/Studio, Erwin, IDM/ARDM models). Experience with integration of multi-cloud services (Azure) with on-premises technologies. Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations. Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets. Experience with at least one MPP database technology such as Redshift, Synapse, Teradata, or Snowflake. Experience with version control systems like GitHub and deployment/CI tools. Experience with Azure Data Factory, Databricks, and Azure Machine Learning is a plus. Experience with metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with business intelligence tools (such as Power BI).
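As a minimal sketch of what a source-to-target mapping materialized in PySpark might look like for the modeling work this posting describes (all paths, table names, and column names here are hypothetical, not part of the posting):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stm_sketch").getOrCreate()

# Hypothetical source extract landed from a CRM system.
src = spark.read.parquet("/landing/crm/customers")

# Source-to-target mapping: rename, cast, and standardize columns so the
# physical model stays stable even if the source schema drifts.
dim_customer = (
    src.select(
        F.col("cust_id").cast("bigint").alias("customer_key"),
        F.initcap(F.trim("cust_nm")).alias("customer_name"),
        F.upper("ctry_cd").alias("country_code"),
        F.to_date("created_ts").alias("effective_date"),
    )
    .dropDuplicates(["customer_key"])
)

# Persist as a Delta table so downstream models can evolve additively.
dim_customer.write.format("delta").mode("overwrite").saveAsTable("gold.dim_customer")
```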
Posted 2 months ago
5 - 10 years
9 - 13 Lacs
Hyderabad
Work from Office
Overview As a member of the data engineering team, you will be the key technical expert developing and overseeing PepsiCo's data product build and operations, and you will drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help lead the development of very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners, and business users. You'll be working in a hybrid environment with in-house, on-premise data sources as well as cloud and remote systems. Responsibilities Be a founding member of the data engineering team. Help attract talent to the team by networking with your peers, representing PepsiCo HBS at conferences and other events, and discussing our values and best practices when interviewing candidates. Own data pipeline development end-to-end, spanning data modeling, testing, scalability, operability, and ongoing metrics. Ensure that we build high-quality software by reviewing peer code check-ins. Define best practices for product development, engineering, and coding as part of a world-class engineering team. Collaborate in architecture discussions and architectural decision-making as part of continually improving and expanding these platforms. Lead feature development in collaboration with other engineers; validate requirements/stories, assess current system capabilities, and decompose feature requirements into engineering tasks. Focus on delivering high-quality data pipelines and tools through careful analysis of system capabilities and feature requests, peer reviews, test automation, and collaboration with other engineers. Develop software in short iterations to quickly add business value. Introduce new tools and practices to improve data and code quality; this includes researching and sourcing third-party tools and libraries, as well as developing tools in-house to improve workflow and quality for all data engineers. Support data pipelines developed by your team through good exception handling, monitoring, and, when needed, debugging production issues. Qualifications 6-9 years of overall technology experience, including at least 5+ years of hands-on software development, data engineering, and systems architecture. 4+ years of experience in SQL optimization and performance tuning. Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines. Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets. Experience with data profiling and data quality tools like Apache Griffin, Deequ, or Great Expectations. Current skills in the following technologies: Python; orchestration platforms such as Airflow, Luigi, Databricks, or similar; relational databases such as Postgres, MySQL, or equivalents; MPP data systems such as Snowflake, Redshift, Synapse, or similar; cloud platforms such as AWS, Azure, or similar; version control (e.g., GitHub) and familiarity with deployment and CI/CD tools.
Fluency with Agile processes and tools such as Jira or Pivotal Tracker. Experience with running and scaling applications on cloud infrastructure and containerized services like Kubernetes is a plus. Understanding of metadata management, data lineage, and data glossaries is a plus.
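Since Airflow is named among the orchestration platforms, a minimal DAG sketch of the extract-transform-load pattern such a pipeline might follow (the DAG id, schedule, and callables are hypothetical):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling a batch from the source system...")

def transform():
    print("applying business rules...")

def load():
    print("writing to the data lake...")

with DAG(
    dag_id="daily_sales_pipeline",      # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load  # simple linear dependency chain
```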
Posted 2 months ago
2 - 5 years
11 - 16 Lacs
Bengaluru
Work from Office
Technical Expert BSI We are looking for a Technical Expert to be part of our Business Solutions Integrations team in the Analytics, Data and Integration stream. Position Snapshot Location: Bengaluru Type of Contract: Permanent Stream: Analytics, Data and Integration Type of work: Hybrid Work Language: Fluent Business English The role The Integration Technical Expert will work in the Business Solution Integration team, focused on the product engineering and operations related to the Data Integration, Digital Integration, and Process Integration products in Business Solution Integration and the initiatives where these products are used. You will work together with the Product Manager and Product Owners, as well as various other counterparts, in the evolution of the DI, PI, and Digital products. You will work with architects to orchestrate the design of the integration solutions, act as the first point of contact for project teams to manage demand, and help drive the transition from engineering to sustain as per the BSI standards. You will also work with Operations Managers and Sustain teams on the orchestration of operations activities, proposing improvements for better performance of the platforms. What you'll do Work with architects to understand and orchestrate the design choices between the different Data, Process, and Digital Integration patterns for fulfilling data needs. Translate the various requirements into deliverables for the development and implementation of Process, Data, and Digital Integration solutions, following up on requests to get the work done. Design, develop, and implement integration solutions using ADF, LTRS, Data Integration, SAP PO, CPI, Logic Apps, MuleSoft, and Confluent. Work with the Operations Managers and Sustain teams to orchestrate performance and operational issues. We offer you We offer more than just a job. We put people first and inspire you to become the best version of yourself. Great benefits, including a competitive salary and a comprehensive social benefits package. We have one of the most competitive pension plans on the market, as well as flexible remuneration with tax advantages: health insurance, restaurant card, mobility plan, etc. Personal and professional growth through ongoing training and constant career opportunities, reflecting our conviction that people are our most important asset. Minimum qualifications Minimum of 7 years of industry experience in software delivery projects. Experience in project and product management, agile methodologies, and solution delivery at scale. Skilled and experienced Technical Integration Expert with experience across various integration platforms and tools, including ADF, LTRS, Data Integration, SAP PO, CPI, Logic Apps, MuleSoft, and Confluent. Ability to contribute to a high-performing, motivated workgroup by applying interpersonal and collaboration skills to achieve goals. Fluency in English with excellent oral and written communication skills. Experience working with cultural diversity: respect for various cultures and understanding of how to work with a variety of cultures in the most effective way. Bonus points if you have Experience with the Azure platform (especially with Data Factory) Experience with Azure DevOps and with ServiceNow Experience with Power Apps and Power BI About the IT Hub We are a team of IT professionals from many countries and diverse backgrounds, each with unique missions and challenges in the world's biggest health, nutrition and wellness company.
We innovate every day through forward-looking technologies to create opportunities for Nestlé's digital challenges with our consumers, customers, and at the workplace. We collaborate with our business partners around the world to deliver standardized, integrated technology products and services to create tangible business value. About Nestlé We are Nestlé, the largest food and beverage company. We are approximately 275,000 employees strong, driven by the purpose of enhancing the quality of life and contributing to a healthier future. Our values are rooted in respect: respect for ourselves, respect for others, respect for diversity, and respect for our future. With more than CHF 94.4 billion in sales in 2022, we have an expansive presence, with 344 factories in 77 countries. Want to learn more? Visit us at www.nestle.com. We encourage the diversity of applicants across gender, age, ethnicity, nationality, sexual orientation, social background, religion or belief, and disability. Step outside your comfort zone; share your ideas, way of thinking, and working to make a difference to the world, every single day. You own a piece of the action – make it count. Join IT Hub Nestlé #beaforceforgood How we will proceed You send us your CV → We contact relevant applicants → Interviews → Feedback → Job offer communication to the finalist → First working day
Posted 2 months ago
10 - 15 years
12 - 16 Lacs
Pune
Work from Office
About The Role The leader must demonstrate an ability to anticipate, understand, and act on evolving customer needs, both stated and unstated. Through this, the candidate must create a customer-centric organization and use innovative thinking frameworks to foster value-added relations. With the right balance of bold initiatives, continuous improvement, and governance, the leader must adhere to the delivery standards set by the client and eClerx, leveraging knowledge of market drivers and competition to effectively anticipate trends and opportunities. Besides, the leader must demonstrate a capacity to transform, align, and energize organization resources, and take appropriate risks to lead the organization in a new direction. As a leader, the candidate must build engaged and high-impact direct, virtual, and cross-functional teams, and take the lead in raising the performance bar, building capability, and bringing out the best in their teams. By collaborating and forging partnerships both within and outside the functional area, the leader must work towards a shared vision and achieve positive business outcomes. Associate Program Manager Role and responsibilities: Represent eClerx in client pitches, external forums, and COE (Center of Excellence) activities to promote cloud engineering expertise. Lead research, assessments, and development of best practices to keep our cloud engineering solutions at the forefront of technology. Contribute to the growth of the cloud engineering practice through thought leadership, including the creation of white papers and articles. Lead and collaborate on multi-discipline assessments at client sites to identify new cloud-based opportunities. Provide technical leadership in the design and development of robust, scalable cloud architectures. Drive key cloud engineering projects, ensuring high performance, scalability, and adherence to best practices. Design and implement data architectures that address performance, scalability, and data latency requirements. Lead the development of cloud-based solutions, ensuring they are scalable, robust, and aligned with business needs. Anticipate and mitigate data bottlenecks, proposing strategies to enhance data processing efficiency. Provide mentorship and technical guidance to junior team members. Technical and Functional skills: Bachelor's degree with 10+ years of experience in data management and cloud engineering. Proven experience in at least 2-3 large-scale cloud implementations within industries such as Retail, Manufacturing, or Technology. Expertise in Azure Cloud, Azure Data Lake, Databricks, Teradata, and ETL technologies. Strong problem-solving skills with a focus on performance optimization and data quality. Ability to collaborate effectively with analysts, subject matter experts, and external partners. About Us At eClerx, we serve some of the largest global companies, including 50 Fortune 500 clients. Our clients call upon us to solve their most complex problems and deliver transformative insights. Across roles and levels, you get the opportunity to build expertise, challenge the status quo, think bolder, and help our clients seize value. About the Team eClerx is a global leader in productized services, bringing together people, technology, and domain expertise to amplify business results. Our mission is to set the benchmark for client service and success in our industry. Our vision is to be the innovation partner of choice for technology, data analytics, and process management services.
Since our inception in 2000, we've partnered with top companies across various industries, including financial services, telecommunications, retail, and high-tech. Our innovative solutions and domain expertise help businesses optimize operations, improve efficiency, and drive growth. With over 18,000 employees worldwide, eClerx is dedicated to delivering excellence through smart automation and data-driven insights. At eClerx, we believe in nurturing talent and providing hands-on experience. eClerx is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, or any other legally protected basis, per applicable law.
Posted 2 months ago
10 - 15 years
12 - 16 Lacs
Mumbai
Work from Office
About The Role The leader must demonstrate an ability to anticipate, understand, and act on evolving customer needs, both stated and unstated. Through this, the candidate must create a customer-centric organization and use innovative thinking frameworks to foster value-added relations. With the right balance of bold initiatives, continuous improvement, and governance, the leader must adhere to the delivery standards set by the client and eClerx, leveraging knowledge of market drivers and competition to effectively anticipate trends and opportunities. Besides, the leader must demonstrate a capacity to transform, align, and energize organization resources, and take appropriate risks to lead the organization in a new direction. As a leader, the candidate must build engaged and high-impact direct, virtual, and cross-functional teams, and take the lead in raising the performance bar, building capability, and bringing out the best in their teams. By collaborating and forging partnerships both within and outside the functional area, the leader must work towards a shared vision and achieve positive business outcomes. Associate Program Manager Role and responsibilities: Represent eClerx in client pitches, external forums, and COE (Center of Excellence) activities to promote cloud engineering expertise. Lead research, assessments, and development of best practices to keep our cloud engineering solutions at the forefront of technology. Contribute to the growth of the cloud engineering practice through thought leadership, including the creation of white papers and articles. Lead and collaborate on multi-discipline assessments at client sites to identify new cloud-based opportunities. Provide technical leadership in the design and development of robust, scalable cloud architectures. Drive key cloud engineering projects, ensuring high performance, scalability, and adherence to best practices. Design and implement data architectures that address performance, scalability, and data latency requirements. Lead the development of cloud-based solutions, ensuring they are scalable, robust, and aligned with business needs. Anticipate and mitigate data bottlenecks, proposing strategies to enhance data processing efficiency. Provide mentorship and technical guidance to junior team members. Technical and Functional skills: Bachelor's degree with 10+ years of experience in data management and cloud engineering. Proven experience in at least 2-3 large-scale cloud implementations within industries such as Retail, Manufacturing, or Technology. Expertise in Azure Cloud, Azure Data Lake, Databricks, Teradata, and ETL technologies. Strong problem-solving skills with a focus on performance optimization and data quality. Ability to collaborate effectively with analysts, subject matter experts, and external partners. About Us At eClerx, we serve some of the largest global companies, including 50 Fortune 500 clients. Our clients call upon us to solve their most complex problems and deliver transformative insights. Across roles and levels, you get the opportunity to build expertise, challenge the status quo, think bolder, and help our clients seize value. About the Team eClerx is a global leader in productized services, bringing together people, technology, and domain expertise to amplify business results. Our mission is to set the benchmark for client service and success in our industry. Our vision is to be the innovation partner of choice for technology, data analytics, and process management services.
Since our inception in 2000, we've partnered with top companies across various industries, including financial services, telecommunications, retail, and high-tech. Our innovative solutions and domain expertise help businesses optimize operations, improve efficiency, and drive growth. With over 18,000 employees worldwide, eClerx is dedicated to delivering excellence through smart automation and data-driven insights. At eClerx, we believe in nurturing talent and providing hands-on experience. eClerx is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, or any other legally protected basis, per applicable law.
Posted 2 months ago
6 - 8 years
3 - 7 Lacs
Gurugram
Work from Office
Skills: Bachelor's degree/Master's degree with high rankings from reputed colleges. Preferably 6-8 years of ETL/data analysis experience with a reputed firm. Expertise in a Big Data managed platform environment like Databricks using Python/PySpark/Spark SQL. Experience in handling large data volumes and orchestrating automated ETL/data pipelines using CI/CD and cloud technologies. Experience deploying ETL/data pipelines and workflows in cloud technologies and architectures such as Azure and Amazon Web Services will be valued. Experience in data modelling (e.g., database structure, entity relationships, UIDs, etc.), data profiling, and data quality validation. Experience adopting software development best practices (e.g., modularization, testing, refactoring, etc.). Conduct data assessments, perform data quality checks, and transform data using SQL and ETL tools. Excellent written and verbal communication skills in English. Self-motivated with a strong sense of problem-solving, ownership, and an action-oriented mindset. Able to cope with pressure and demonstrate a reasonable level of flexibility/adaptability. Track record of strong problem-solving, requirement gathering, and leading by example. Able to work well within teams across continents/time zones with a collaborative mindset.
Posted 2 months ago
4 - 9 years
7 - 17 Lacs
Ahmedabad
Work from Office
Key Responsibilities: Design, develop, and deploy ETL pipelines using ADF and Fabric. Collaborate with data engineers and analysts to understand data requirements and translate them into efficient ETL processes. Optimize data pipelines for performance, scalability, and robustness. Integrate data from various sources, including S3, relational databases, and APIs. Implement data validation and error handling mechanisms to ensure data quality. Monitor and troubleshoot ETL jobs to ensure data accuracy and pipeline reliability. Maintain and update existing data pipelines as data sources and requirements evolve. Document ETL processes, data models, and pipeline configurations. Qualifications: Experience: 3+ years of experience in ETL development, with a focus on ADF, the MSBI stack, SQL, Power BI, and Fabric. Technical Skills: Strong expertise in ADF, the MSBI stack, SQL, and Power BI. Proficiency in programming languages such as Python or Scala. Hands-on experience with ADF, Fabric, Power BI, and MSBI. Solid understanding of data warehousing concepts, data modeling, and ETL best practices. Familiarity with orchestration tools like Apache Airflow is a plus. Data Integration: Experience with integrating data from diverse sources, including relational databases, APIs, and flat files. Problem-Solving: Strong analytical and problem-solving skills with the ability to troubleshoot complex ETL issues. Communication: Excellent communication skills, with the ability to work collaboratively with cross-functional teams. Education: Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience. Nice to Have: Experience with data lakes and big data processing. Knowledge of data governance and security practices in a cloud environment.
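A minimal sketch of the data validation and error-handling mechanism this posting calls for, assuming a hypothetical order feed; the field names and rules are illustrative only:

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl.validate")

@dataclass
class Order:                      # hypothetical record shape
    order_id: str
    amount: float
    currency: str

def validate(order: Order) -> list[str]:
    """Return rule violations; an empty list means the row is clean."""
    errors = []
    if not order.order_id:
        errors.append("missing order_id")
    if order.amount <= 0:
        errors.append("non-positive amount")
    if order.currency not in {"INR", "USD", "EUR"}:
        errors.append(f"unknown currency {order.currency!r}")
    return errors

def run(batch: list[Order]):
    """Split a batch into clean rows and rejects, logging each reject."""
    clean, rejected = [], []
    for row in batch:
        problems = validate(row)
        if problems:
            log.warning("rejected %s: %s", row.order_id, problems)
            rejected.append((row, problems))
        else:
            clean.append(row)
    return clean, rejected

clean, rejected = run([Order("A1", 120.0, "INR"), Order("", -5.0, "XYZ")])
```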
Posted 2 months ago
4 - 8 years
10 - 18 Lacs
Kochi, Chennai, Bengaluru
Hybrid
Data warehouse developer Experience: 3-8 years Location: Chennai/Kochi/Bangalore Responsibilities: Design, build, and maintain scalable and robust data engineering pipelines using Microsoft Azure technologies such as SQL Azure, Azure Data Factory, and Azure Databricks. Develop and optimize data solutions using Azure SQL, PySpark, and PySQL to handle complex data transformation and processing tasks. Implement and manage data storage solutions in OneLake and Azure SQL, ensuring data integrity and accessibility. Work closely with stakeholders to design and build effective reporting and analytics solutions using Power BI and other analytical tools. Collaborate with IT and security teams to integrate solutions within Azure AD and ensure compliance with data security and privacy standards. Contribute to the architectural design of database and lakehouse structures, optimizing for performance and scalability. Utilize .NET frameworks, where applicable, to enhance data processing and integration capabilities. Design and implement OLAP and data warehousing solutions, adhering to best practices in data warehouse design concepts. Perform database and query performance tuning and optimization to ensure high performance and reliability. Stay updated with the latest technologies and trends in big data, proposing and implementing new tools and technologies to improve data systems and processes. Implement unit testing and automation strategies to ensure the reliability and performance of the full-stack application. Conduct thorough code reviews, providing constructive feedback to team members and ensuring adherence to coding standards and best practices. Collaborate with QA engineers to implement and maintain automated testing procedures, including API testing. Work in an Agile environment, participating in sprint planning, daily stand-ups, and retrospective meetings to ensure timely and iterative project delivery. Stay abreast of industry trends and emerging technologies to continuously improve skills and contribute innovative ideas. Requirements: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 3-8 years of professional experience in data engineering or a related field. Profound expertise in SQL, T-SQL, database design, and data warehousing principles. Strong experience with Microsoft Azure tools, including MS Fabric, SQL Azure, Azure Data Factory, Azure Databricks, and Azure Data Lake. Proficient in Python, PySpark, and PySQL for data processing and analytics tasks. Experience with Power BI and other reporting and analytics tools. Demonstrated knowledge of OLAP, data warehouse design concepts, and performance optimization in database and query processing. Knowledge of .NET frameworks is highly preferred. Excellent problem-solving, analytical, and communication skills. Interested candidates can share their resumes at megha.chattopadhyay@aspiresys.com
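One common form of the query performance tuning mentioned above is partition pruning; a hedged PySpark sketch, with hypothetical lakehouse paths and columns:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("perf_sketch").getOrCreate()

# Hypothetical fact table: partitioning on a low-cardinality date column
# lets the engine prune files instead of scanning the whole table.
sales = spark.read.format("delta").load("/lakehouse/raw/sales")

(sales
    .withColumn("sale_date", F.to_date("sale_ts"))
    .repartition("sale_date")            # co-locate rows per partition
    .write.format("delta")
    .partitionBy("sale_date")
    .mode("overwrite")
    .save("/lakehouse/gold/sales_by_date"))

# A filter on the partition column now touches only the matching files.
recent = (spark.read.format("delta").load("/lakehouse/gold/sales_by_date")
          .where(F.col("sale_date") >= "2024-01-01"))
print(recent.count())
```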
Posted 2 months ago
5 - 7 years
15 - 20 Lacs
Hyderabad
Work from Office
Job Summary We are seeking a skilled and detail-oriented Azure Data Engineer to join our data team. In this role, you will be responsible for designing, building, and maintaining scalable data pipelines and solutions on the Microsoft Azure cloud platform. You will collaborate with data analysts, the reporting team, and business stakeholders to ensure efficient data availability, quality, and governance. Experience Level: Mid-Level/Senior Must-have skills: Strong hands-on experience with Azure Data Factory, Azure Data Lake Storage, and Azure SQL. Good-to-have skills: Working knowledge of Databricks, Azure Synapse Analytics, Azure Functions, Logic Apps workflows, Log Analytics, and Azure DevOps. Roles and Responsibilities Design and implement scalable data pipelines using Azure Data Factory, Azure SQL, Databricks, and other Azure services. Develop and maintain data lakes and data warehouses on Azure. Integrate data from various on-premises and cloud-based sources. Create and manage ETL/ELT processes, ensuring data accuracy and performance. Optimize and troubleshoot data pipelines and workflows. Ensure data security, compliance, and governance. Collaborate with business stakeholders to define data requirements and deliver actionable insights. Monitor and maintain Azure data services for performance and cost-efficiency.
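A typical incremental ETL/ELT pattern for pipelines like these is a watermark-based load; a minimal PySpark sketch, assuming a hypothetical control table and Delta tables:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("incremental_load").getOrCreate()

WATERMARK_TBL = "ops.load_watermarks"   # hypothetical control table

# 1. Read the last high-water mark recorded for this source.
last_mark = (spark.table(WATERMARK_TBL)
             .where(F.col("source") == "orders")
             .agg(F.max("loaded_up_to"))
             .first()[0])

# 2. Pull only rows modified since the last run.
changed = (spark.read.table("bronze.orders")
           .where(F.col("modified_ts") > F.lit(last_mark)))

# 3. Append to the curated layer, then advance the watermark.
changed.write.format("delta").mode("append").saveAsTable("silver.orders")

new_mark = changed.agg(F.max("modified_ts")).first()[0]
if new_mark is not None:
    spark.sql(f"UPDATE {WATERMARK_TBL} SET loaded_up_to = '{new_mark}' "
              "WHERE source = 'orders'")
```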
Posted 2 months ago
5 - 10 years
25 - 30 Lacs
Pune
Hybrid
Skills: SQL, ADF, Databricks, SSRS, Power BI, ETL, Data Warehousing, MSBI
Posted 2 months ago
5 - 10 years
5 - 10 Lacs
Hyderabad
Work from Office
Greetings from Future Focus Infotech!!! We have multiple opportunities for Azure Data Engineers (F2F interview on 17th May, Saturday). Exp: 5+ yrs Location: Hyderabad Job Type: This is a permanent position with Future Focus Infotech Pvt Ltd, and you will be deputed with our client. A small glimpse of Future Focus Infotech Pvt Ltd (Company URL: www.focusinfotech.com). If you are interested in the above opportunity, send your updated CV and the below information to reema.b@focusinfotech.com. Kindly mention the below details. Total Years of Experience: Current CTC: Expected CTC: Notice Period: Current location: Available for interview on 17th May (Saturday): Pan Card: Thanks & Regards, Reema reema.b@focusinfotech.com 8925798887
Posted 2 months ago
4 - 8 years
4 - 8 Lacs
Bengaluru
Work from Office
You Will: Focus on ML model load testing and the creation of E2E test cases. Evaluate models' scalability and latency by running suites of metrics under different RPS, and create and automate the test cases for individual models, ensuring a smooth rollout of the models. Enhance monitoring of model scalability, and handle incidents of increased error rates. Collaborate with existing machine learning engineers, backend engineers, and QA test engineers from cross-functional teams. You Bring: Advanced degree (Master's or Ph.D.) in Computer Science/Statistics/Data Science, specializing in machine learning. 3+ years of industry experience (please exclude years spent in a research group or R&D team). Strong programming skills in languages such as Java, Python, and Scala. Hands-on experience with Databricks, MLflow, and Seldon. Excellent problem-solving and analytical skills. Expertise in recommendation algorithms. Experience with software engineering principles and use of cloud services like AWS. Preferred Qualifications: Experience with Kubeflow, Tecton, and Jenkins. Experience in building and monitoring large-scale online customer-facing ML applications, preferably recommendation systems. Experience working with custom ML platforms, feature stores, and monitoring ML models. Familiarity with best practices in machine learning and software engineering.
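A minimal sketch of the kind of RPS-sweep load test described above, assuming a hypothetical HTTP scoring endpoint and payload:

```python
import asyncio
import statistics
import time

import aiohttp

ENDPOINT = "http://models.internal/score"    # hypothetical model endpoint

async def fire(session, payload, results):
    t0 = time.perf_counter()
    async with session.post(ENDPOINT, json=payload) as resp:
        await resp.read()
        results.append((time.perf_counter() - t0, resp.status == 200))

async def run_at_rps(rps: int, duration_s: int = 10):
    results = []
    async with aiohttp.ClientSession() as session:
        tasks = []
        for _ in range(rps * duration_s):
            tasks.append(asyncio.create_task(
                fire(session, {"features": [1, 2, 3]}, results)))
            await asyncio.sleep(1 / rps)         # pace requests at target RPS
        await asyncio.gather(*tasks)
    times = sorted(t for t, ok in results if ok)
    errors = sum(1 for _, ok in results if not ok)
    if times:
        p95 = times[max(0, int(0.95 * len(times)) - 1)]
        print(f"RPS={rps}: p50={statistics.median(times):.3f}s "
              f"p95={p95:.3f}s errors={errors}/{len(results)}")
    else:
        print(f"RPS={rps}: no successful responses ({errors} errors)")

for rps in (10, 50, 100):                        # sweep the request rate
    asyncio.run(run_at_rps(rps))
```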
Posted 2 months ago
9 - 14 years
30 - 45 Lacs
Chennai
Work from Office
Job Description: We are looking for a highly motivated and skilled L4 Data Engineer Manager, particularly one with Databricks experience, who will be responsible for leading data engineering projects, overseeing a team of data engineers, and designing data architectures on platforms like Databricks. Responsibilities: Leadership and Team Management: Lead, mentor, and develop a team of data engineers, ensuring high performance and career growth. Collaborate with stakeholders across data science, analytics, and engineering teams to deliver high-impact data solutions. Oversee the data engineering project lifecycle, from requirements gathering to deployment, ensuring quality and timeliness. Data Architecture and Strategy: Develop scalable data architectures and solutions on Databricks for ETL processes, data warehousing, and big data processing. Define and enforce data governance policies, best practices, and standards for data processing and management. Design data flow pipelines for efficient data ingestion, storage, and processing in cloud environments (e.g., AWS, Azure, GCP). Databricks Management and Optimization: Optimize Databricks clusters for cost and performance efficiency, leveraging cluster scaling and resource management best practices. Implement advanced data transformations and data models within Databricks using Spark and Delta Lake. Ensure integration between Databricks and other data tools, such as data lakes, SQL databases, and BI tools. Data Quality and Security: Monitor and ensure data quality, reliability, and security within the Databricks environment. Implement data validation checks, data profiling, and error-handling mechanisms. Collaborate with security teams to ensure compliance with data privacy regulations and internal security standards. Technical Development and Innovation: Stay updated with the latest Databricks capabilities and cloud technology trends to introduce innovative data engineering practices. Develop reusable and efficient code, libraries, and tools to automate and streamline data workflows. Troubleshoot and resolve complex data pipeline issues and provide continuous improvements in performance and reliability. Skills: Advanced experience with Databricks, PySpark, and data pipeline frameworks. Proficiency in Python, SQL, and/or Scala. Familiarity with cloud platforms like AWS, Azure, or GCP. 8+ years in data engineering, with 2+ years in a leadership or managerial role. Strong experience in data architecture, data pipeline optimization, and data modeling. Proven experience in managing large-scale data processing systems and ETL pipelines. Familiarity with Airflow for workflow orchestration and experience with Linux administration and shell scripting. Excellent communication skills to collaborate effectively with cross-functional teams. Strong problem-solving abilities and attention to detail.
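The Delta Lake transformations mentioned here often center on ACID upserts; a minimal sketch using the Delta Lake Python API, with hypothetical table and feed names (OPTIMIZE/ZORDER is Databricks-specific):

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta_merge_sketch").getOrCreate()

# Hypothetical change feed landed by an upstream ingestion job.
updates = spark.read.parquet("/landing/customers_changes")

target = DeltaTable.forName(spark, "silver.customers")

# Upsert: update changed rows and insert new ones in one ACID commit.
(target.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())

# Compact small files so downstream scans stay fast (Databricks-specific).
spark.sql("OPTIMIZE silver.customers ZORDER BY (customer_id)")
```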
Posted 2 months ago
7 - 10 years
9 - 12 Lacs
Noida
Work from Office
Position summary We are seeking a Staff Data Engineer with 7-10 years of experience to join our Data Platform team. This role will report to the Manager of Data Engineering and be involved in the planning, design, and implementation of our centralized data warehouse solution for ETL, reporting, and analytics across all applications within the company. Key duties & responsibilities Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions. Work with other teams with deep experience in ETL processes, distributed microservices, and data science domains to understand how to centralize their data. Share your passion for experimenting with and learning new technologies. Perform thorough data analysis, uncover opportunities, and address business problems. Qualification B.E/B.Tech/MCA or equivalent professional degree. Experience, Skills and Knowledge Deep knowledge of and experience working with SSIS and T-SQL. Experienced in Azure Data Factory, Azure Databricks, and Azure Data Lake. Experience working with a language like Python or Scala. Experience working with SQL and NoSQL database systems such as MongoDB. Experience in distributed system architecture design. Experience with cloud environments (Azure preferred). Experience with acquiring and preparing data from primary and secondary disparate data sources (real-time preferred). Experience working on large-scale data product implementations, responsible for technical delivery, mentoring, and managing peer engineers. Experience working with Databricks preferred. Experience working with agile methodology preferred. Healthcare industry experience preferred. Key competency profile Spot new opportunities by anticipating change and planning accordingly. Find ways to better serve customers and patients. Be accountable for customer service of the highest quality. Create connections across teams by valuing differences and including others. Own your development by implementing and sharing your learnings. Motivate each other to perform at our highest level. Help people improve by learning from successes and failures. Work the right way by acting with integrity and living our values every day. Succeed by proactively identifying problems and solutions for yourself and others.
Posted 2 months ago
5 - 10 years
15 - 30 Lacs
Hyderabad
Work from Office
What is Blend Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by seamlessly aligning human expertise with artificial intelligence. The company is dedicated to unlocking value and fostering innovation for its clients by harnessing world-class people and data-driven strategy. We believe that the power of people and AI can have a meaningful impact on your world, creating more fulfilling work and projects for our people and clients. For more information, visit www.blend360.com What is the Role? We are looking for an experienced Senior Data Engineer with a strong foundation in Python, SQL, and Spark, and hands-on expertise in AWS and Databricks. In this role, you will build and maintain scalable data pipelines and architecture to support analytics, data science, and business intelligence initiatives. You'll work closely with cross-functional teams to drive data reliability, quality, and performance. What you'll be doing? Design, develop, and optimize scalable data pipelines using Databricks on AWS, with services such as Glue, S3, Lambda, and EMR, and Databricks notebooks, workflows, and jobs. Build data lakes in AWS Databricks. Build and maintain robust ETL/ELT workflows using Python and SQL to handle structured and semi-structured data. Develop distributed data processing solutions using Apache Spark or PySpark. Partner with data scientists and analysts to provide high-quality, accessible, and well-structured data. Ensure data quality, governance, security, and compliance across pipelines and data stores. Monitor, troubleshoot, and improve the performance of data systems and pipelines. Participate in code reviews and help establish engineering best practices. Mentor junior data engineers and support their technical development. What do we need from you? Bachelor's or master's degree in computer science, engineering, or a related field. 5+ years of hands-on experience in data engineering, with at least 2 years working with AWS Databricks. Strong programming skills in Python for data processing and automation. Advanced proficiency in SQL for querying and transforming large datasets. Deep experience with Apache Spark/PySpark in a distributed computing environment. Solid understanding of data modelling, warehousing, and performance optimization techniques. Proficiency with AWS services such as Glue, S3, Lambda, and EMR. Experience with version control (Git or CodeCommit). Experience with workflow orchestration like Airflow or AWS Step Functions is a plus. What do you get in return? Competitive Salary: Your skills and contributions are highly valued here, and we make sure your salary reflects that, rewarding you fairly for the knowledge and experience you bring to the table. Dynamic Career Growth: Our vibrant environment offers you the opportunity to grow rapidly, providing the right tools, mentorship, and experiences to fast-track your career. Idea Tanks: Innovation lives here. Our "Idea Tanks" are your playground to pitch, experiment, and collaborate on ideas that can shape the future. Growth Chats: Dive into our casual "Growth Chats" where you can learn from the best, whether it's over lunch or during a laid-back session with peers; it's the perfect space to grow your skills. Snack Zone: Stay fueled and inspired! In our Snack Zone, you'll find a variety of snacks to keep your energy high and ideas flowing.
Recognition & Rewards: We believe great work deserves to be recognized. Expect regular Hive-Fives, shoutouts, and the chance to see your ideas come to life as part of our reward program. Fuel Your Growth Journey with Certifications: We're all about your growth groove! Level up your skills with our support as we cover the cost of your certifications.
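For the S3-to-lakehouse pipelines this role describes (Databricks on AWS), a minimal hedged sketch; the bucket names, paths, and schema are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("s3_to_lakehouse").getOrCreate()

# Hypothetical landing bucket; on Databricks-on-AWS the cluster's instance
# profile or a Unity Catalog external location supplies S3 credentials.
raw = spark.read.json("s3://acme-landing/events/2024/*/*.json")

curated = (raw
           .withColumn("event_date", F.to_date("event_ts"))
           .filter(F.col("event_type").isNotNull()))

(curated.write.format("delta")
        .mode("append")
        .partitionBy("event_date")
        .save("s3://acme-lake/silver/events"))
```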
Posted 2 months ago
12 - 15 years
25 - 40 Lacs
Bengaluru
Work from Office
Required skillset - Data Modeling (Conceptual, Logical, Physical) - Minimum 5 years Database Technologies (SQL Server, Oracle, PostgreSQL, NoSQL) - Minimum 5 years Cloud Platforms (AWS, Azure, GCP) - Minimum 3 years ETL Tools (Informatica, Talend, Apache NiFi) - Minimum 3 years Big Data Technologies (Hadoop, Spark, Kafka) - Minimum 5 years Data Governance & Compliance (GDPR, HIPAA) - Minimum 3 years Master Data Management (MDM) - Minimum 3 years Data Warehousing (Snowflake, Redshift, BigQuery) - Minimum 3 years API Integration & Data Pipelines - Good to have Performance Tuning & Optimization - Minimum 3 years Business Intelligence (Power BI, Tableau) - Minimum 3 years Job Description: We are seeking experienced Data Architects to design and implement enterprise data solutions, ensuring data governance, quality, and advanced analytics capabilities. The ideal candidate will have expertise in defining data policies, managing metadata, and leading data migrations from legacy systems to Microsoft Fabric, Databricks, or Snowflake. Experience and deep knowledge of at least one of these three platforms is critical. Additionally, they will play a key role in identifying use cases for advanced analytics and developing machine learning models to drive business insights. Key Responsibilities: 1. Data Governance & Management Establish and maintain a data usage hierarchy to ensure structured data access. Define data policies, standards, and governance frameworks to ensure consistency and compliance. Implement data quality management practices to improve accuracy, completeness, and reliability. Oversee metadata and Master Data Management (MDM) to enable seamless data integration across platforms. 2. Data Architecture & Migration Lead the migration of data systems from legacy infrastructure to Microsoft Fabric. Design scalable, high-performance data architectures that support business intelligence and analytics. Collaborate with IT and engineering teams to ensure efficient data pipeline development. 3. Advanced Analytics & Machine Learning Identify and define use cases for advanced analytics that align with business objectives. Design and develop machine learning models to drive data-driven decision-making. Work with data scientists to operationalize ML models and ensure real-world applicability. Required Qualifications: Proven experience as a Data Architect or in a similar role in data management and analytics. Strong knowledge of data governance frameworks, data quality management, and metadata management. Hands-on experience with Microsoft Fabric and data migration from legacy systems. Expertise in advanced analytics, machine learning models, and AI-driven insights. Familiarity with data modelling, ETL processes, and cloud-based data solutions (Azure, AWS, or GCP). Strong communication skills with the ability to translate complex data concepts into business insights.
Posted 2 months ago
3 - 6 years
4 - 6 Lacs
Pune
Work from Office
Job Description: We are seeking a highly motivated and skilled Data Engineer with a strong background in building and maintaining data pipelines. The ideal candidate will have experience working with Python, ETL processes, SQL, and Databricks to support our clients' data infrastructure and analytics needs. Key Responsibilities: Design, build, and maintain scalable and efficient ETL pipelines. Develop robust data workflows using Databricks and cloud-based data platforms. Write complex SQL queries for data extraction, transformation, and reporting. Work with structured and semi-structured data, ensuring data quality and integrity. Collaborate with data analysts, scientists, and other engineering teams to deliver high-quality data solutions. Optimize data processes for performance, reliability, and scalability. Required Skills: Minimum 3 years of experience in data engineering. Proficiency in Python for data processing and automation. Strong hands-on experience in ETL development. Advanced SQL skills for querying and managing relational databases. Experience working with Databricks (Spark/PySpark) for big data processing. Familiarity with version control tools like Git. Experience with cloud platforms (e.g., AWS, Azure, or GCP) is a plus. Preferred Skills: Knowledge of data modeling, data warehousing, and performance tuning. Exposure to tools like Airflow, Kafka, or other orchestration frameworks. Understanding of data governance and security best practices.
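Handling semi-structured data, as mentioned above, commonly means flattening nested JSON in PySpark before SQL reporting; a minimal sketch with a hypothetical orders payload:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("flatten_sketch").getOrCreate()

# Hypothetical payload: {"order_id": "A1", "items": [{"sku": "X", "qty": 2}]}
raw = spark.read.json("/landing/orders.json")

# Explode the nested array so each line item becomes its own row.
flat = (raw
        .select("order_id", F.explode("items").alias("item"))
        .select(
            "order_id",
            F.col("item.sku").alias("sku"),
            F.col("item.qty").cast("int").alias("qty"),
        ))

flat.createOrReplaceTempView("order_items")
spark.sql("SELECT sku, SUM(qty) AS total_qty "
          "FROM order_items GROUP BY sku").show()
```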
Posted 2 months ago
5 - 10 years
6 - 9 Lacs
Ahmedabad
Work from Office
Job Title: Databricks Developer (Pharma Domain) Location: Mumbai / Pune / Bangalore / Chennai / Ahmedabad / Noida (Chennai most preferred) Experience: 5-10 years Shift Timings: Day shift IST, from 12 pm Work Mode: Contract (Fixed-Term Contract) Duration: 6 months Job Summary: We are looking for an experienced Databricks SQL Engineer with a Pharma or Life Sciences background to join our offshore Data Engineering team. This role focuses on building efficient, scalable SQL-based data models and pipelines using Databricks SQL, Spark SQL, and Delta Lake. The ideal candidate will play a key role in transforming raw data into valuable analytical insights, enabling critical decision-making across pharma-related business functions. Key Responsibilities: Design and optimize SQL queries and data models in Databricks for large-scale datasets. Develop and maintain robust ETL/ELT pipelines using Databricks workflows. Implement Delta Lake and Unity Catalog for secure and governed data assets. Ensure data quality via validation, testing, and monitoring mechanisms. Optimize performance and cost for the data lakehouse environment. Collaborate with stakeholders to support analytics and business needs. Deploy notebooks and SQL workflows using CI/CD best practices. Document pipelines, queries, and data models to foster self-service.
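A minimal sketch of a governed Delta table under Unity Catalog as this posting describes, using Spark SQL; the catalog, schema, table, and group names are hypothetical and assume a UC-enabled workspace:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("uc_model_sketch").getOrCreate()

# Three-level Unity Catalog namespace: catalog.schema.table.
spark.sql("""
    CREATE TABLE IF NOT EXISTS pharma.silver.batch_results (
        batch_id     STRING,
        study_code   STRING,
        result_value DOUBLE,
        recorded_at  TIMESTAMP
    ) USING DELTA
""")

# Table comments feed the catalog and downstream data discovery.
spark.sql("COMMENT ON TABLE pharma.silver.batch_results IS "
          "'Curated assay results; source: LIMS extract'")

# Access is governed centrally in Unity Catalog rather than per cluster.
spark.sql("GRANT SELECT ON TABLE pharma.silver.batch_results TO `analysts`")
```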
Posted 2 months ago
7 - 11 years
15 - 19 Lacs
Hyderabad
Work from Office
ABOUT AMGEN Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today. What you will do Role Description: We are seeking a Data Solutions Architect to design, implement, and optimize scalable and high-performance data solutions that support enterprise analytics, AI-driven insights, and digital transformation initiatives. This role will focus on data strategy, architecture, governance, security, and operational efficiency, ensuring seamless data integration across modern cloud platforms. The ideal candidate will work closely with engineering teams, business stakeholders, and leadership to establish a future-ready data ecosystem, balancing performance, cost-efficiency, security, and usability. This position requires expertise in modern cloud-based data architectures, data engineering best practices, and Scaled Agile methodologies. Roles & Responsibilities: Design and implement scalable, modular, and future-proof data architectures that support enterprise data lakes, data warehouses, and real-time analytics. Develop enterprise-wide data frameworks that enable governed, secure, and accessible data across various business domains. Define data modeling strategies to support structured and unstructured data, ensuring efficiency, consistency, and usability across analytical platforms. Lead the development of high-performance data pipelines for batch and real-time data processing, integrating APIs, streaming sources, transactional systems, and external data platforms. Optimize query performance, indexing, caching, and storage strategies to enhance scalability, cost efficiency, and analytical capabilities. Establish data interoperability frameworks that enable seamless integration across multiple data sources and platforms. Drive data governance strategies, ensuring security, compliance, access controls, and lineage tracking are embedded into enterprise data solutions. Implement DataOps best practices, including CI/CD for data pipelines, automated monitoring, and proactive issue resolution, to improve operational efficiency. Lead Scaled Agile (SAFe) practices, facilitating Program Increment (PI) Planning, Sprint Planning, and Agile ceremonies, ensuring iterative delivery of enterprise data capabilities. Collaborate with business stakeholders, product teams, and technology leaders to align data architecture strategies with organizational goals. Act as a trusted advisor on emerging data technologies and trends, ensuring that the enterprise adopts cutting-edge data solutions that provide competitive advantage and long-term scalability. What we expect of you Must-Have Skills: Experience in data architecture, enterprise data management, and cloud-based analytics solutions. Expertise in Databricks, cloud-native data platforms, and distributed computing frameworks. Strong proficiency in modern data modeling techniques, including dimensional modeling, NoSQL, and data virtualization. Experience designing high-performance ETL/ELT pipelines and real-time data processing solutions. Deep understanding of data governance, security, metadata management, and access control frameworks. 
Hands-on experience with CI/CD for data solutions, DataOps automation, and infrastructure as code (IaC). Proven ability to collaborate with cross-functional teams, including business executives, data engineers, and analytics teams, to drive successful data initiatives. Strong problem-solving, strategic thinking, and technical leadership skills. Experience with SQL/NoSQL databases and vector databases for large language models. Experience with data modeling and performance tuning for both OLAP and OLTP databases. Experience with Apache Spark. Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps. Good-to-Have Skills: Deep expertise in the Biotech & Pharma industries. Experience with Data Mesh architectures and federated data governance models. Certification in cloud data platforms or enterprise architecture frameworks. Knowledge of AI/ML pipeline integration within enterprise data architectures. Familiarity with BI & analytics platforms for enabling self-service analytics and enterprise reporting. Education and Professional Certifications Doctorate degree with 6-8+ years of experience in Computer Science, IT, or a related field OR Master's degree with 8-10+ years of experience in Computer Science, IT, or a related field OR Bachelor's degree with 10-12+ years of experience in Computer Science, IT, or a related field. AWS Certified Data Engineer preferred. Databricks certification preferred. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Ability to learn quickly, be organized, and be detail-oriented. Strong presentation and public speaking skills. What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
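The automated unit testing and CI/CD practices listed above often take the shape of pytest suites over small PySpark transforms; a minimal, hypothetical sketch:

```python
# test_transforms.py -- run by the CI pipeline (e.g., Jenkins) on each commit.
import pytest
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

def dedupe_latest(df, key: str, ts: str):
    """Keep only the most recent row per business key."""
    w = Window.partitionBy(key).orderBy(F.col(ts).desc())
    return (df.withColumn("_rn", F.row_number().over(w))
              .where(F.col("_rn") == 1)
              .drop("_rn"))

@pytest.fixture(scope="session")
def spark():
    return (SparkSession.builder.master("local[2]")
            .appName("unit_tests").getOrCreate())

def test_dedupe_keeps_latest(spark):
    df = spark.createDataFrame(
        [("p1", "2024-01-01"), ("p1", "2024-02-01"), ("p2", "2024-01-15")],
        ["record_id", "updated_at"],
    )
    out = dedupe_latest(df, "record_id", "updated_at")
    assert out.count() == 2
    latest = {r.record_id: r.updated_at for r in out.collect()}
    assert latest["p1"] == "2024-02-01"
```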
Posted 2 months ago
4 - 7 years
16 - 20 Lacs
Hyderabad
Work from Office
About the Role Amgen is seeking a dedicated and skilled Finance Manager to lead our FIT Reporting + Analytics (FITRA) team in India. As the sole FITRA team manager on the ground in Amgen India, you will play a key role in ensuring the successful delivery of essential financial reporting and analytics deliverables while contributing to strategic corporate initiatives. Primary Responsibilities: Manage daily reporting refresh operations, including resolving outages, user security issues, and data security challenges. Conduct weekly quality control checks on dashboards to ensure data integrity and proper functionality. Develop Tableau dashboards, including creating new ones and enhancing/fixing existing dashboards to meet the needs of our 1,000+ strong user base and senior leaders. Collaborate with the US-based FITRA team to explore and potentially transition from Tableau to Power BI. Support, as needed, data, reporting, and technology projects aligned with strategic corporate goals. Team Management: Supervise a team of two staff members (one associate and one senior associate). Ensure timely and quality-controlled delivery of work. Collaboration: Partner with US FITRA colleagues and FIT Data + Analytics (FITDA) colleagues to achieve shared objectives. Report directly to the hiring senior manager based in Thousand Oaks, California. Required Skills and Qualifications: Advanced proficiency in Tableau and Power BI development. Development experience with cloud storage and ETL tools such as Databricks and Prophecy. Working knowledge of Python and SQL. Solid understanding of finance concepts, financial statements, and financial data. Effective reporting design sensibility, including an acumen for the different ways to tell a story or present insights in reporting. Skill in managing large and complex datasets. Strong people management and project management skills. Clear, concise verbal and written business communication. Additional Preferred Experience: Familiarity with Oracle Hyperion, Anaplan, SAP S/4HANA, Workday, and JIRA. Ability to work collaboratively with teams and stakeholders outside of FIT, including cross-functionally. Experience training both a team you manage and customers who use your reporting/work product. Education/Prior Employment Qualifications: Master's degree & 5 years of finance or analytics development experience, OR Bachelor's degree & 8 years of finance or analytics development experience, OR Diploma & 10 to 12 years of finance or analytics development experience.
Posted 2 months ago
3 - 5 years
6 - 9 Lacs
Hyderabad
Work from Office
What you will do
Let’s do this. Let’s change the world. In this vital role you will primarily focus on analyzing scientific requirements from Global Research and translating them into efficient and effective information systems solutions. As a domain expert, the prospective BA will collaborate with cross-functional teams to identify data product enhancement opportunities, perform data analysis, troubleshoot issues, and support system implementation and maintenance. Additionally, the role involves developing the data product launch and user adoption strategy for Amgen Research Foundational Data Systems. Your expertise in business process analysis and technology will contribute to the successful delivery of IT solutions that drive operational efficiency and meet business objectives. This role requires expertise in biopharma scientific domains as well as informatics solution delivery. Additionally, extensive collaboration with global teams is required to ensure seamless integration and operational excellence. The ideal candidate will have a solid background in the end-to-end software development lifecycle and be a Scaled Agile practitioner, coupled with change management and transformation experience. This role demands the ability to deliver against key organizational central initiatives, foster a collaborative environment, and deliver high-quality results in a matrixed organizational structure. Collaborate with geographically dispersed teams, including those in the US, EU, and other international locations. Partner with Amgen India DTI site leadership to ensure alignment, and follow global standards and practices. Foster a culture of collaboration, innovation, and continuous improvement. Function as a Scientific Business Analyst, providing domain expertise for Research Data and Analytics within a Scaled Agile Framework (SAFe) product team. Serve as Agile team scrum master or project manager as needed. Create functional analytics dashboards and fit-for-purpose applications for quantitative research, scientific analysis, and business intelligence (Databricks, Spotfire, Tableau, Dash, Streamlit, RShiny); a minimal illustrative sketch follows this posting. Support a suite of custom internal platforms, commercial off-the-shelf (COTS) software, and systems integrations. Translate scientific and technological needs into clear, actionable requirements for development teams. Develop and maintain release deliverables that clearly outline the planned features, enhancements, timelines, and milestones. Identify and manage risks associated with the systems, including technological risks, scientific validation, and user acceptance. Develop documentation, communication plans, and training plans for end users. Ensure scientific data operations are scoped into building Research-wide Artificial Intelligence/Machine Learning capabilities. Ensure operational excellence, cybersecurity, and compliance.
What we expect of you
Master’s degree and 1 to 3 years of Life Science/Biotechnology/Pharmacology/Information Systems experience OR Bachelor’s degree and 3 to 5 years of Life Science/Biotechnology/Pharmacology/Information Systems experience OR Diploma and 7 to 9 years of Life Science/Biotechnology/Pharmacology/Information Systems experience.
Basic Qualifications: 3+ years of experience in implementing and supporting biopharma scientific research data analytics. Excellent problem-solving skills and a passion for tackling complex challenges in drug discovery with data. Collaborative spirit and effective communication skills to work seamlessly in a multi-functional team. An ongoing commitment to learning and staying at the forefront of AI/ML advancements. Familiarity with data analytics and scientific computing platforms such as Databricks, Dash, Streamlit, RShiny, Spotfire, and Tableau, and related programming languages like SQL, Python, and R.
Preferred Qualifications: Demonstrated expertise in a scientific domain area and related technology needs. Understanding of semantics and FAIR (Findability, Accessibility, Interoperability, and Reusability) data concepts. Experience with cloud (e.g., AWS) and on-premises compute infrastructure. Experience with scientific and technical team collaborations, ensuring seamless coordination across teams and driving the successful delivery of technical projects. Experience creating impactful slide decks and communicating data. Ability to deliver features meeting research user demands using Agile methodology. We understand that to successfully sustain and grow as a global enterprise and deliver for patients, we must ensure a diverse and inclusive work environment.
Professional Certifications: SAFe for Teams certification (preferred). SAFe Scrum Master or similar (preferred).
Soft Skills: Strong transformation and change management experience. Exceptional collaboration and communication skills. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented with a focus on achieving team goals. Strong presentation and public speaking skills.
Equal opportunity statement
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
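The dashboard sketch referenced above: a minimal Streamlit app of the fit-for-purpose kind this role builds. The CSV path and columns (target, assay_type, ic50_nm) are placeholders, not a real Amgen dataset:

import pandas as pd
import streamlit as st

st.title("Assay Results Explorer")

df = pd.read_csv("assay_results.csv")  # hypothetical research extract
target = st.selectbox("Target", sorted(df["target"].unique()))
subset = df[df["target"] == target]

st.metric("Compounds screened", len(subset))
st.bar_chart(subset.groupby("assay_type")["ic50_nm"].median())
st.dataframe(subset)

Run with `streamlit run app.py`; a Dash or RShiny equivalent would follow the same select-filter-plot pattern.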
Posted 2 months ago
2 - 6 years
3 - 6 Lacs
Hyderabad
Work from Office
Role Description
The R&D Data Catalyst Team is responsible for building Data Searching, Cohort Building, and Knowledge Management tools that provide the Amgen scientific community with visibility to Amgen’s wealth of human datasets, projects and study histories, and knowledge over various scientific findings. These solutions are pivotal tools in Amgen’s goal to accelerate the speed of discovery and speed to market of advanced precision medications. The Sr. Data Engineer will be responsible for the end-to-end development of an enterprise analytics and data mastering solution leveraging Databricks and Power BI. This role requires expertise in both data architecture and analytics, with the ability to create scalable, reliable, and high-performing enterprise solutions that support research cohort-building and the advanced research pipeline. The ideal candidate will have experience creating and surfacing large unified repositories of human data, based on integrations from multiple repositories and solutions, and be exceptionally skilled with data analysis and profiling. You will collaborate closely with stakeholders, product team members, and related IT teams to design and implement data models, integrate data from various sources, and ensure best practices for data governance and security. The ideal candidate will have a strong background in data warehousing, ETL, Databricks, Power BI, and enterprise data mastering.
Roles & Responsibilities
Design and build scalable enterprise analytics solutions using Databricks, Power BI, and other modern data tools. Leverage data virtualization, ETL, and semantic layers to balance the need for unification, performance, and data transformation with the goal of reducing data proliferation. Break down features into work that aligns with the architectural direction runway. Participate hands-on in pilots and proofs-of-concept for new patterns. Create robust documentation from data analysis and profiling, and of proposed designs and data logic. Develop advanced SQL queries to profile and unify data. Develop data processing code in SQL, along with semantic views, to prepare data for reporting. Develop Power BI models and reporting packages. Design robust data models and processing layers that support both analytical processing and operational reporting needs. Design and develop solutions based on best practices for data governance, security, and compliance within Databricks and Power BI environments. Ensure the integration of data systems with other enterprise applications, creating seamless data flows across platforms. Develop and maintain Power BI solutions, ensuring data models and reports are optimized for performance and scalability. Collaborate with stakeholders to define data requirements, functional specifications, and project goals. Continuously evaluate and adopt new technologies and methodologies to enhance the architecture and performance of data solutions.
Basic Qualifications and Experience
Master’s degree with 4 to 6 years of experience in Product Owner / Platform Owner / Service Owner OR Bachelor’s degree with 8 to 10 years of experience in Product Owner / Platform Owner / Service Owner
Functional Skills: Must-Have Skills
Minimum of 3 years of hands-on experience with BI solutions (preferably Power BI or Business Objects), including report development, dashboard creation, and optimization. Minimum of 6 years of hands-on experience building change-data-capture (CDC) ETL pipelines, data warehouse design and build, and enterprise-level data management (a minimal CDC upsert sketch follows this posting).
Hands-on experience with Databricks, including data engineering, optimization, and analytics workloads. Deep understanding of Power BI, including model design, DAX, and Power Query. Proven experience designing and implementing data mastering solutions and data governance frameworks. Expertise in cloud platforms (AWS), data lakes, and data warehouses. Strong knowledge of ETL processes, data pipelines, and integration technologies. Strong communication and collaboration skills to work with cross-functional teams and senior leadership. Ability to assess business needs and design solutions that align with organizational goals. Exceptional hands-on capabilities with data profiling, data transformation, and data mastering. Success in mentoring and training team members.
Good-to-Have Skills: Experience in developing differentiated and deliverable solutions. Experience with human data, ideally human healthcare data. Familiarity with laboratory testing, patient data from clinical care, HL7, FHIR, and/or clinical trial data management.
Professional Certifications: ITIL Foundation or other relevant certifications (preferred). SAFe Agile Practitioner (6.0). Microsoft Certified: Data Analyst Associate (Power BI) or related certification. Databricks Certified Professional or similar certification.
Soft Skills: Excellent analytical and troubleshooting skills. Deep intellectual curiosity. Highest degree of initiative and self-motivation. Strong verbal and written communication skills, including presentation of complex technical/business topics to varied audiences. Confident technical leader. Ability to work effectively with global, virtual teams, specifically including leveraging tools and artifacts to assure clear and efficient collaboration across time zones. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong problem-solving and analytical skills; ability to learn quickly, and to retain and synthesize complex information from diverse sources.
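The CDC sketch referenced above: a hypothetical Delta Lake MERGE upsert on Databricks. The table names and the op-code convention (I/U/D) are assumptions for illustration; `spark` is the session a Databricks notebook provides.

from delta.tables import DeltaTable

target = DeltaTable.forName(spark, "research.subjects")    # mastered Delta table
changes = spark.read.table("staging.subjects_cdc")         # CDC feed with an 'op' column

# Apply deletes, updates, and inserts from the change feed in one pass.
(target.alias("t")
    .merge(changes.alias("c"), "t.subject_id = c.subject_id")
    .whenMatchedDelete(condition="c.op = 'D'")
    .whenMatchedUpdateAll(condition="c.op = 'U'")
    .whenNotMatchedInsertAll(condition="c.op = 'I'")
    .execute())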
Posted 2 months ago
4 - 6 years
10 - 14 Lacs
Hyderabad
Work from Office
ABOUT THE ROLE
Role Description
The R&D Data Catalyst Team is responsible for building Data Searching, Cohort Building, and Knowledge Management tools that provide the Amgen scientific community with visibility to Amgen’s wealth of human datasets, projects and study histories, and knowledge over various scientific findings. These solutions are pivotal tools in Amgen’s goal to accelerate the speed of discovery and speed to market of advanced precision medications. The Data Architect will be responsible for the end-to-end architecture of an enterprise analytics and data mastering solution leveraging Databricks and Power BI. This role requires expertise in both data architecture and analytics, with the ability to create scalable, reliable, and high-performing enterprise solutions that support research cohort-building and the advanced research pipeline. The ideal candidate will have proven experience creating and surfacing large unified repositories of human data, based on integrations from multiple sources and solutions. You will collaborate closely with stakeholders across departments, including data engineering, business intelligence, and IT teams, to design and implement data models, integrate data from various sources, and ensure best practices for data governance and security. The ideal candidate will have a strong background in data warehousing, ETL, Databricks, Power BI, and enterprise data mastering.
Roles & Responsibilities
Architect scalable enterprise analytics solutions using Databricks, Power BI, and other modern data tools. Leverage data virtualization, ETL, and semantic layers to balance the need for unification, performance, and data transformation with the goal of reducing data proliferation. Support development planning by breaking down features into work that aligns with the architectural direction runway. Participate hands-on in pilots and proofs-of-concept for new patterns. Create robust documentation of architectural direction, patterns, and standards. Present and train engineers and cross-team collaborators on architecture strategy and patterns. Collaborate with data engineers to build and optimize ETL pipelines, ensuring efficient data ingestion and processing from multiple sources. Design robust data models and processing layers that support both analytical processing and operational reporting needs. Develop and implement best practices for data governance, security, and compliance within Databricks and Power BI environments. Ensure the integration of data systems with other enterprise applications, creating seamless data flows across platforms. Provide thought leadership and strategic guidance on data architecture, advanced analytics, and data mastering best practices. Develop and maintain Power BI solutions, ensuring data models and reports are optimized for performance and scalability. Serve as a subject matter expert on Power BI and Databricks, providing technical leadership and mentoring to other teams. Collaborate with stakeholders to define data requirements, architecture specifications, and project goals. Continuously evaluate and adopt new technologies and methodologies to enhance the architecture and performance of data solutions.
Basic Qualifications and Experience
Master’s degree with 4 to 6 years of experience in data management and data architecture OR Bachelor’s degree with 6 to 8 years of experience in data management and data architecture
Functional Skills: Must-Have Skills
Minimum of 3 years of hands-on experience with BI solutions (preferably Power BI or Business Objects), including report development, dashboard creation, and optimization. Minimum of 7 years of hands-on experience building change-data-capture (CDC) ETL pipelines, data warehouse design and build, and enterprise-level data management. Hands-on experience with Databricks, including data engineering, optimization, and analytics workloads. Deep understanding of Power BI, including model design, DAX, and Power Query. Proven experience designing and implementing data mastering solutions and data governance frameworks. Expertise in cloud platforms (AWS), data lakes, and data warehouses. Strong knowledge of ETL processes, data pipelines, and integration technologies. Strong communication and collaboration skills to work with cross-functional teams and senior leadership. Ability to assess business needs and design solutions that align with organizational goals. Exceptional hands-on capabilities with data profiling, data transformation, and data mastering (a short profiling sketch follows this posting). Success in mentoring and training team members.
Good-to-Have Skills: Experience in developing differentiated and deliverable solutions. Experience with human data, ideally human healthcare data. Familiarity with laboratory testing, patient data from clinical care, HL7, FHIR, and/or clinical trial data management.
Professional Certifications: ITIL Foundation or other relevant certifications (preferred). SAFe Agile Practitioner (6.0). Microsoft Certified: Data Analyst Associate (Power BI) or related certification. Databricks Certified Professional or similar certification.
Soft Skills: Excellent analytical and troubleshooting skills. Deep intellectual curiosity. Highest degree of initiative and self-motivation. Strong verbal and written communication skills, including presentation of complex technical/business topics to varied audiences. Confident technical leader. Ability to work effectively with global, virtual teams, specifically including leveraging tools and artifacts to assure clear and efficient collaboration across time zones. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong problem-solving and analytical skills; ability to learn quickly, and to retain and synthesize complex information from diverse sources.
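The profiling sketch referenced above: a quick first-pass profile in PySpark. The table and key column are placeholders; `spark` is the notebook-provided session.

from pyspark.sql import functions as F

df = spark.read.table("mdm.candidate_records")  # hypothetical source to master

# Null count per column, plus row count and key cardinality: enough to spot
# broken feeds and weak match keys before any mastering logic runs.
nulls = df.select([F.sum(F.col(c).isNull().cast("int")).alias(c) for c in df.columns])
nulls.show()
print("rows:", df.count())
print("distinct record_id:", df.select("record_id").distinct().count())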
Posted 2 months ago
2 - 5 years
15 - 19 Lacs
Hyderabad
Work from Office
Senior Associate Finance - Financial Insights + Technology
What you will do
Amgen is seeking a dedicated and skilled Sr Associate Finance to join our FIT Reporting + Analytics (FITRA) team in India. In this role, you will directly contribute to the successful delivery of essential financial reporting and analytics deliverables while contributing to strategic corporate initiatives.
Primary Responsibilities: Support daily reporting refresh operations, including resolving outages, user security issues, and data security challenges. Conduct weekly quality control checks on dashboards to ensure data integrity and proper functionality (an illustrative reconciliation sketch follows this posting). Develop Tableau dashboards, including creating new ones and enhancing/fixing existing dashboards. Collaborate with the US-based FITRA team to explore and potentially transition from Tableau to Power BI. Support, as needed, data, reporting, and technology projects aligned with strategic corporate goals.
What we expect of you
Collaboration: Partner with US FITRA colleagues and FIT Data + Analytics (FITDA) colleagues to achieve shared objectives. Report directly to the FITRA Finance Manager at Amgen India.
Required Skills and Qualifications: Advanced proficiency in Tableau development and Power BI development. Development experience with cloud storage and ETL tools such as Databricks and Prophecy. Working knowledge of Python and SQL. Solid understanding of finance concepts, financial statements, and financial data. Skill in managing large and complex datasets. Clear, concise verbal and written business communication.
Additional Preferred Experience: Familiarity with Oracle Hyperion, Anaplan, SAP S/4 Hana, Workday, and JIRA. Ability to work collaboratively with teams and stakeholders outside of FIT, including cross-functional partners. Experience training customers on how to use your reporting/work product.
Education/Prior Employment Qualifications: Master's degree & 2 years of finance or analytics development experience OR Bachelor's degree & 5 years of finance or analytics development experience OR Diploma and 7 to 9 years of finance or analytics development experience
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
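The reconciliation sketch referenced above, comparing a dashboard extract against system-of-record totals in Python; the file names, columns, and tolerance are all illustrative:

import pandas as pd

extract = pd.read_csv("fitra_dashboard_extract.csv")   # what the dashboard shows
source = pd.read_parquet("finance_actuals.parquet")    # system-of-record pull

diff = (extract.groupby("fiscal_period")["amount"].sum()
        - source.groupby("fiscal_period")["amount"].sum()).fillna(0)

out_of_balance = diff[diff.abs() > 0.01]  # small tolerance for rounding
if out_of_balance.empty:
    print("extract ties to source for all periods")
else:
    print("periods out of balance:")
    print(out_of_balance)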
Posted 2 months ago
2 - 6 years
11 - 15 Lacs
Hyderabad
Work from Office
Amgen’s Precision Medicine technology team is responsible for building Data Searching, Cohort Building, and Knowledge Management tools that provide the Amgen scientific community with visibility to Amgen’s wealth of human datasets, projects and study histories, and knowledge over various scientific findings. These data include multiomics data (genomics, transcriptomics, proteomics, etc.), clinical study subject measurement and outcome data, images, and specimen inventory data. Our PMED data management, standardization, surfacing, and processing capabilities are pivotal tools in Amgen’s goal to accelerate the speed of discovery and speed to market of advanced precision medications. The Solution and Data Architect will be responsible for the end-to-end architecture of an enterprise analytics and data mastering solution leveraging Databricks and Power BI. This role requires expertise in both data architecture and analytics, with the ability to create scalable, reliable, and high-performing enterprise solutions that support research cohort-building and the advanced research pipeline. The ideal candidate will have experience creating and surfacing large unified repositories of human data, based on integrations from multiple repositories and solutions. You will collaborate closely with stakeholders across departments, including data engineering, business intelligence, and IT teams, to design and implement data models, integrate data from various sources, and ensure best practices for data governance and security. The ideal candidate will have a strong background in data warehousing, ETL, Databricks, Power BI, and enterprise data mastering.
Roles & Responsibilities
Architect scalable enterprise analytics solutions using Databricks, Power BI, and other modern data tools. Leverage data virtualization, ETL, and semantic layers to balance the need for unification, performance, and data transformation with the goal of reducing data proliferation. Support development planning by breaking down features into work that aligns with the architectural direction runway. Participate hands-on in pilots and proofs-of-concept for new patterns. Create robust documentation of architectural direction, patterns, and standards. Present and train engineers and cross-team collaborators on architecture strategy and patterns. Collaborate with data engineers to build and optimize ETL pipelines, ensuring efficient data ingestion and processing from multiple sources. Design robust data models and processing layers that support both analytical processing and operational reporting needs. Develop and implement best practices for data governance, security, and compliance within Databricks and Power BI environments. Ensure the integration of data systems with other enterprise applications, creating seamless data flows across platforms. Provide thought leadership and strategic guidance on data architecture, advanced analytics, and data mastering best practices. Develop and maintain Power BI solutions, ensuring data models and reports are optimized for performance and scalability. Serve as a subject matter expert on Power BI and Databricks, providing technical leadership and mentoring to other teams. Collaborate with stakeholders to define data requirements, architecture specifications, and project goals. Continuously evaluate and adopt new technologies and methodologies to enhance the architecture and performance of data solutions.
Basic Qualifications and Experience
Master’s degree with 6 to 8 years of experience in data management and data solution architecture OR Bachelor’s degree with 8 to 10 years of experience in data management and data solution architecture OR Diploma and 10 to 12 years of experience in data management and data solution architecture
Functional Skills: Must-Have Skills
Minimum of 3 years of hands-on experience with BI solutions (preferably Power BI or Business Objects), including report development, dashboard creation, and optimization. Minimum of 7 years of hands-on experience building change-data-capture (CDC) ETL pipelines, data warehouse design and build, and enterprise-level data management. Hands-on experience with Databricks, including data engineering, optimization, and analytics workloads. Deep understanding of Power BI, including model design, DAX, and Power Query. Proven experience designing and implementing data mastering solutions and data governance frameworks. Expertise in cloud platforms (AWS), data lakes, and data warehouses. Strong knowledge of ETL processes, data pipelines, and integration technologies. Strong communication and collaboration skills to work with cross-functional teams and senior leadership. Ability to assess business needs and design solutions that align with organizational goals. Exceptional hands-on capabilities with data profiling, data transformation, and data mastering (an illustrative record-matching sketch follows this posting). Success in mentoring and training team members.
Good-to-Have Skills: Experience in developing differentiated and deliverable solutions. Experience with human data, ideally human healthcare data. Familiarity with laboratory testing, patient data from clinical care, HL7, FHIR, and/or clinical trial data management.
Professional Certifications: ITIL Foundation or other relevant certifications (preferred). SAFe Agile Practitioner (6.0). Microsoft Certified: Data Analyst Associate (Power BI) or related certification. Databricks Certified Professional or similar certification.
Soft Skills: Excellent analytical and troubleshooting skills. Deep intellectual curiosity. Highest degree of initiative and self-motivation. Strong verbal and written communication skills, including presentation of complex technical/business topics to varied audiences. Confident technical leader. Ability to work effectively with global, virtual teams, specifically including leveraging tools and artifacts to assure clear and efficient collaboration across time zones. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong problem-solving and analytical skills; ability to learn quickly, and to retain and synthesize complex information from diverse sources.
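The record-matching sketch referenced above: a toy mastering step that blocks on a normalized name-plus-DOB key and links rows across sources. All data and names are fabricated for illustration.

import pandas as pd

def normalize(name: str) -> str:
    # Lowercase, strip punctuation, and sort tokens so "Doe, Jane" and
    # "jane doe" produce the same match key.
    tokens = "".join(ch if ch.isalnum() else " " for ch in name.lower()).split()
    return " ".join(sorted(tokens))

subjects = pd.DataFrame({
    "source": ["clinical", "omics", "registry"],
    "subject_name": ["Doe, Jane", "jane doe", "Jane  Doe"],
    "dob": ["1980-05-01", "1980-05-01", "1980-05-01"],
})

subjects["match_key"] = subjects["subject_name"].map(normalize) + "|" + subjects["dob"]
golden = (subjects.groupby("match_key")
          .agg(sources=("source", list), name=("subject_name", "first"))
          .reset_index())
print(golden)  # one mastered record linking all three source rows

Real mastering layers fuzzy matching and survivorship rules on top of this blocking step.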
Posted 2 months ago