
211 Data Lakes Jobs - Page 8

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 - 12.0 years

12 - 18 Lacs

Noida, Pune, Bengaluru

Work from Office

Responsibilities:
Lead the technical discovery process, assess customer requirements, and design scalable solutions leveraging Google Cloud's Data & AI services, including BigQuery, Dataflow, Vertex AI, and generative AI offerings such as Gemini and Agent Builder (a short BigQuery example follows this posting).
Architect and demonstrate solutions leveraging generative AI, large language models (LLMs), AI agents, and agentic AI patterns to automate workflows, enhance decision-making, and create intelligent applications.
Develop and deliver compelling product demonstrations, proofs-of-concept (POCs), and technical workshops that showcase the value and capabilities of Google Cloud.
Collaborate with sales to build strong client relationships, articulate the business value of Google Cloud solutions, and drive adoption.
Lead and contribute technical content and architectural designs for RFI/RFP responses and technical proposals leveraging Google Cloud services.
Stay informed of industry trends, competitive offerings, and new Google Cloud product releases, particularly in the infrastructure and data/AI domains.

Requirements:
Extensive experience architecting and designing solutions on Google Cloud Platform, with a strong focus on Data & AI services such as BigQuery, Dataflow, Dataproc, Pub/Sub, Vertex AI (ML Ops, custom models, pre-trained APIs), and generative AI (e.g., Gemini).
Strong understanding of data warehousing, data lakes, streaming analytics, and machine learning pipelines.
Strong understanding of cloud architecture patterns, DevOps practices, and modern software development methodologies.
5+ years of experience in pre-sales or solutions architecture, focused on cloud Data & AI platforms.
Skilled in client engagements, technical presentations, and proposal development, with the ability to work effectively in a cross-functional team environment with sales, product, and engineering teams.
Excellent written and verbal communication skills, with the ability to articulate complex technical concepts to both technical and non-technical audiences.

Location: Noida, Pune, Bengaluru, Hyderabad, Chennai
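As a hedged illustration of the BigQuery work this role would demonstrate in pre-sales settings, here is a minimal query sketch using the official google-cloud-bigquery client; the project, dataset, and table names are hypothetical placeholders, not taken from the posting.

```python
# Minimal sketch: run an aggregate query against a (hypothetical) data lake
# table in BigQuery and print the results.
from google.cloud import bigquery

client = bigquery.Client(project="demo-project")  # hypothetical project id

sql = """
    SELECT channel, COUNT(*) AS events
    FROM `demo-project.analytics.raw_events`
    GROUP BY channel
    ORDER BY events DESC
"""

# client.query() submits the job; .result() waits and returns an iterator of rows.
for row in client.query(sql).result():
    print(row.channel, row.events)
```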

Posted 2 months ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Our industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. Our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Senior Principal Consultant - Data Engineer (AWS, Python, Spark, Kafka for ETL)!

Responsibilities:
Develop, deploy, and manage ETL pipelines using AWS services, Python, Spark, and Kafka (see the sketch after this posting).
Integrate structured and unstructured data from various sources into data lakes and data warehouses.
Design and deploy scalable, highly available, and fault-tolerant AWS data processes using AWS data services (Glue, Lambda, Step Functions, Redshift).
Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness.
Implement and maintain security measures to protect data and systems within the AWS environment, including IAM policies, security groups, and encryption mechanisms.
Migrate application data from legacy databases to cloud-based solutions (Redshift, DynamoDB, etc.) for high availability at low cost.
Develop application programs using Big Data technologies such as Apache Hadoop and Apache Spark with appropriate cloud-based services such as Amazon AWS.
Build data pipelines through ETL (Extract-Transform-Load) processes.
Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data.
Analyse business and functional requirements, which involves reviewing existing system configurations and operating methodologies as well as understanding evolving business needs.
Analyse requirements/user stories in business meetings, assess the impact of requirements on different platforms/applications, and convert business requirements into technical requirements.
Participate in design reviews to provide input on functional requirements, product designs, schedules, and potential problems.
Understand the current application infrastructure and suggest cloud-based solutions that reduce operational cost and require minimal maintenance while providing high availability and improved security.
Perform unit testing on modified software to ensure that new functionality works as expected while existing functionality continues to work in the same way.
Coordinate with release management and other supporting teams to deploy changes to the production environment.

Qualifications we seek in you!
Minimum Qualifications:
Experience designing and implementing data pipelines, building data applications, and performing data migration on AWS.
Strong experience implementing data lakes using AWS services such as Glue, Lambda, Step Functions, and Redshift.
Experience with Databricks is an added advantage.
Strong experience in Python and SQL.
Proven expertise in AWS services such as S3, Lambda, Glue, EMR, and Redshift.
Advanced programming skills in Python for data processing and automation.
Hands-on experience with Apache Spark for large-scale data processing.
Experience with Apache Kafka for real-time data streaming and event processing.
Proficiency in SQL for data querying and transformation.
Strong understanding of security principles and best practices for cloud-based environments.
Experience with monitoring tools and implementing proactive measures to ensure system availability and performance.
Excellent problem-solving skills and ability to troubleshoot complex issues in a distributed, cloud-based environment.
Strong communication and collaboration skills to work effectively with cross-functional teams.

Preferred Qualifications/Skills:
Master's degree in Computer Science, Electronics, or Electrical Engineering.
AWS Data Engineering and cloud certifications; Databricks certifications.
Experience with multiple data integration technologies and cloud platforms.
Knowledge of Change & Incident Management processes.

Why join Genpact?
Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation.
Make an impact - drive change for global enterprises and solve business challenges that matter.
Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities.
Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.
Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit', paying to apply, or purchasing equipment or training.
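A minimal sketch of the kind of PySpark ETL step this posting describes, assuming raw data lands as JSON in S3 and is curated to partitioned Parquet for downstream Redshift/Athena use; the bucket names and columns are hypothetical.

```python
# Hedged sketch: read raw JSON from S3, apply simple cleaning, and write
# partitioned Parquet back to S3. Assumes an EMR/Glue-style cluster where
# the s3:// scheme is available to Spark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = spark.read.json("s3://demo-raw-bucket/orders/")  # hypothetical bucket

cleaned = (
    raw.dropDuplicates(["order_id"])                 # hypothetical key column
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Partitioned Parquet is a common curated-zone layout for lake queries.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://demo-curated-bucket/orders/"
)
```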

Posted 2 months ago

Apply

6.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job description
Department: Digital, Data & IT, Novo Nordisk India Pvt Ltd

Are you passionate about delivering cutting-edge digital and data-driven technology solutions? Do you thrive at the intersection of technology and business, and have a knack for leading complex IT projects? If so, we have an exciting opportunity for you! Join Novo Nordisk as a Delivery Lead in our Digital, Data & IT (DDIT) team in Bangalore, India, and help us shape the future of healthcare. Read on and apply today for a life-changing career.

The position
As a Delivery Lead - Digital, Data & IT, you will:
Lead the full lifecycle of IT projects, from initiation and planning to execution, deployment, and post-go-live support.
Define and manage project scope, timelines, budgets, and resources using Agile or hybrid methodologies (Agile preferred).
Drive sprint planning, backlog grooming, and release management in collaboration with product owners and scrum teams.
Conduct architecture and solution design reviews to ensure scalability and alignment with enterprise standards.
Provide hands-on guidance on solution design, data modelling, API integration, and system interoperability.
Ensure compliance with IT security policies and data privacy regulations, including GDPR and local requirements.
Act as the primary point of contact for business stakeholders, translating business needs into technical deliverables.
Facilitate workshops and design sessions with cross-functional teams, including marketing, sales, medical, and analytics.
Manage vendor relationships, ensuring contract compliance, SLA adherence, and performance reviews.

Qualifications
We are looking for an experienced professional who meets the following criteria:
Bachelor's degree in Computer Science, Information Technology, or a related field, or an MBA/postgraduate degree with a minimum of 3 years of relevant experience.
6-8 years of experience in IT project delivery, with at least 3 years in a technical leadership or delivery management role.
Proven experience with CRM platforms (e.g., Veeva, Salesforce), omnichannel orchestration tools, and patient engagement platforms.
Proven experience in the commercial side of the business is required.
Experience with data lakes and analytics platforms (e.g., Azure Synapse, Power BI) and with mobile/web applications for field force enablement.
Certifications in project management (PMP, PRINCE2) or Agile (Scrum Master, SAFe) are good to have; relevant project management experience can also be considered.
Experience with IT governance models and technical documentation best practices.
Exposure to data privacy tools and frameworks, and familiarity with data and IT security best practices.

About the department
The DDIT department is located at our headquarters, where we manage projects and programs related to business requirements and specialized technical areas. Our team is dedicated to planning, organizing, and controlling resources to achieve project objectives. We foster a dynamic and innovative atmosphere, driving the adoption of Agile processes and best practices across the organization.

Working at Novo Nordisk
Novo Nordisk is a leading global healthcare company with a 100-year legacy of driving change to defeat serious chronic diseases. Building on our strong legacy within diabetes, we are growing massively and expanding our commitment, reaching millions around the world and impacting more than 40 million patient lives daily. All of this has made us one of the 20 most valuable companies in the world by market cap. Our success relies on the joint potential and collaboration of our more than 72,000 employees around the world. We recognise the importance of the unique skills and perspectives our people bring to the table, and we work continuously to bring out the best in them. Working at Novo Nordisk, we're working toward something bigger than ourselves, and it's a collective effort. Join us! Together, we go further. Together, we're life changing.

Contact
To submit your application, please upload your CV and motivational letter online (click on Apply and follow the instructions). Internal candidates are kindly requested to inform their line managers before applying.

Deadline: 8th July 2025

Disclaimer
It has been brought to our attention that there have recently been instances of fraudulent job offers purporting to be from Novo Nordisk and/or its affiliate companies. The individuals or organizations sending these false employment offers may pose as a Novo Nordisk recruiter or representative and request personal information, the purchase of equipment, or funds to further the recruitment process, or offer paid trainings. Be advised that Novo Nordisk does not extend unsolicited employment offers. Furthermore, Novo Nordisk does not charge prospective employees fees or make requests for funding as part of the recruitment process. We commit to an inclusive recruitment process and equality of opportunity for all our job applicants. At Novo Nordisk we recognize that it is no longer good enough to aspire to be the best company in the world; we need to aspire to be the best company for the world, and we know that this is only possible with talented employees with diverse perspectives, backgrounds, and cultures. We are therefore committed to creating an inclusive culture that celebrates the diversity of our employees, the patients we serve, and the communities we operate in. Together, we're life changing.

Posted 2 months ago

Apply

3.0 - 6.0 years

40 - 45 Lacs

Kochi, Kolkata, Bhubaneswar

Work from Office

We are seeking experienced Data Engineers with over 3 years of experience to join our team at Intuit, through Cognizant. The selected candidates will be responsible for developing and maintaining scalable data pipelines, managing data warehousing solutions, and working with advanced cloud environments. The role requires strong technical proficiency and the ability to work onsite in Bangalore.

Key Responsibilities:
Design, build, and maintain data pipelines to ingest, process, and analyze large datasets using PySpark.
Work on Data Warehouse and Data Lake solutions to manage structured and unstructured data.
Develop and optimize complex SQL queries for data extraction and reporting.
Leverage AWS cloud services such as S3, EC2, EMR, Athena, and Redshift for data storage, processing, and analytics (see the Athena sketch after this posting).
Collaborate with cross-functional teams to ensure the successful delivery of data solutions that meet business needs.
Monitor data pipelines and troubleshoot any issues related to data integrity or system performance.

Required Skills:
3+ years of experience in data engineering or related fields.
In-depth knowledge of Data Warehouses and Data Lakes.
Proven experience building data pipelines using PySpark.
Strong expertise in SQL for data manipulation and extraction.
Familiarity with AWS cloud services, including S3, EC2, EMR, Athena, and Redshift, and other cloud computing platforms.

Preferred Skills:
Python programming experience is a plus.
Experience working in Agile environments with tools like JIRA and GitHub.
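For the Athena usage mentioned above, here is a hedged boto3 sketch of the common submit-and-poll pattern; the database, query, region, and result bucket are hypothetical.

```python
# Illustrative pattern: submit an Athena query, poll until it completes,
# and report the final state. Production code would bound the retries.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")  # hypothetical region

qid = athena.start_query_execution(
    QueryString="SELECT event_type, COUNT(*) FROM events GROUP BY event_type",
    QueryExecutionContext={"Database": "demo_lake"},
    ResultConfiguration={"OutputLocation": "s3://demo-athena-results/"},
)["QueryExecutionId"]

while True:
    state = athena.get_query_execution(QueryExecutionId=qid)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

print("Query", qid, "finished with state", state)
```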

Posted 2 months ago

Apply

3.0 - 6.0 years

20 - 30 Lacs

Bengaluru

Work from Office

Job Title: Data Engineer II (Python, SQL)
Experience: 3 to 6 years
Location: Bangalore, Karnataka (work from office, 5 days a week)

Role: As a Data Engineer II, you will design, build, and maintain scalable data pipelines. You'll collaborate across data analytics, marketing, data science, and product teams to drive insights and AI/ML integration using robust and efficient data infrastructure.

Key Responsibilities:
Design, develop, and maintain end-to-end data pipelines (ETL/ELT).
Ingest, clean, transform, and curate data for analytics and ML usage.
Work with orchestration tools like Airflow to schedule and manage workflows (a DAG sketch follows this posting).
Implement data extraction using batch, CDC, and real-time tools (e.g., Debezium, Kafka Connect).
Build data models and enable real-time and batch processing using Spark and AWS services.
Collaborate with DevOps and architects on system scalability and performance.
Optimize Redshift-based data solutions for performance and reliability.

Must-Have Skills & Experience:
3+ years in data engineering or data science with strong ETL and pipeline experience.
Expertise in Python and SQL.
Strong experience in data warehousing, data lakes, data modeling, and ingestion.
Working knowledge of Airflow or similar orchestration tools.
Hands-on experience with data extraction techniques such as CDC and batch, using Debezium, Kafka Connect, or AWS DMS.
Experience with AWS services: Glue, Redshift, Lambda, EMR, Athena, MWAA, SQS, etc.
Knowledge of Spark or similar distributed systems.
Experience with queuing/messaging systems like SQS, Kinesis, or RabbitMQ.
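The orchestration responsibility above can be illustrated with a minimal Airflow DAG, assuming Airflow 2.4+ (which accepts the `schedule` argument); the DAG id and task bodies are placeholders, not from the posting.

```python
# Minimal sketch: a daily DAG with an extract -> transform dependency.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull changes from the source (e.g., a CDC feed)")

def transform():
    print("clean the batch and load it into the warehouse")

with DAG(
    dag_id="demo_daily_pipeline",     # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2                          # run transform only after extract succeeds
```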

Posted 2 months ago

Apply

3.0 - 6.0 years

3 - 6 Lacs

Hyderabad, Telangana, India

On-site

Responsibilities:
Design and develop visual performance dashboards for Corporate Services and Finance functions to monitor key service delivery metrics.
Apply tools such as Tableau, Power BI, and Smartsheet to create effective reporting solutions, ensuring data accuracy and integrity.
Build and implement automation solutions to enhance efficiency and reduce manual effort, leveraging tools such as Power Automate, Power Apps, Power Query, Tableau, Smartsheet, and SharePoint.
Build and maintain data pipelines, queries, and reports to support strategic decision-making, business operations, and ad hoc analytics requests.
Collect, aggregate, and analyze data from multiple systems and data warehouses (e.g., Cvent, Concur, SAP) to provide actionable insights and drive improvements in execution.
Support AI automation initiatives, including the maintenance of intake and AI self-service platforms like ServiceNow, while finding opportunities for AI-driven process enhancements.
Ensure seamless data integration and system configurations in collaboration with Technology teams, enforcing data governance policies and standardized data connectivity.
Proactively identify trends, conduct investigations, and provide data-driven recommendations to functional leaders to improve business performance and operational efficiency.
Prepare recurring reports and dashboards, including monthly, quarterly, and annual performance measurements for Corporate Services leadership.
Develop and optimize data analytics queries, standardized/custom report layouts, and a library of executive report formats to align reporting processes with business objectives.
Apply data science methodologies, including regression, classification, clustering, and predictive modeling, to enhance reporting and analytics capabilities (a brief clustering sketch follows this posting).
Conduct in-depth, ad hoc analyses to investigate operational challenges and provide data-driven insights.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients. The professional we seek is someone with these qualifications.

Basic Qualifications:
Master's degree in Data Analytics, Computer Science, or a related field with 1 to 3 years of experience, including 2 years of data analysis, automation, or business intelligence experience; OR
Bachelor's degree in Data Analytics, Computer Science, or a related field with 3 to 5 years of experience, including 4 years of data analysis, automation, or business intelligence experience; OR
Diploma in Data Analytics, Computer Science, or a related field with 7 to 9 years of experience, including 5 years of data analysis, automation, or business intelligence experience.

Preferred Qualifications:
Experience with data analytics, reporting tools, and automation solutions.
Solid skills in data visualization and dashboard creation (e.g., Power BI, Tableau).
Proficiency in SQL and NoSQL databases, including relational table design, indexing strategies, and writing complex queries, with experience handling big data models, data lakes, and distributed computing frameworks.
Ability to work with large datasets and extract meaningful insights.
Expertise in automation platforms and workflows, including the Microsoft Power Platform (Power Automate, Power Query, Power Apps, SharePoint, and Pages), to streamline processes and improve efficiency.
Experience with programming languages such as Python and R, and with formats such as JSON, for data processing, automation, and analytics.
Experience with AI-driven analytics and large language models (LLMs) to enhance data insights and automation capabilities.
Experience working with self-service platforms such as ServiceNow to support business functions and automation.
Understanding of enterprise data governance principles to ensure data accuracy, integrity, and compliance across reporting and automation systems.
Familiarity with additional automation tools, such as UiPath and emerging AI technologies, to drive process optimization.
Strong data visualization and storytelling skills, with the ability to translate complex data into meaningful dashboards, executive reports, and infographics.
Knowledge of statistical techniques, including regression, clustering, and classification, as well as data discovery and visualization methods such as distributions, histograms, and bar charts.
Proven ability to take initiative and complete projects independently, while effectively collaborating across teams and influencing without direct authority.
Strong attention to detail and ability to manage multiple tasks effectively.
Strong communication skills, with the ability to present insights clearly to leadership and coordinate cross-functional data requests and updates.
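A brief, hedged sketch of one technique named above (clustering) using scikit-learn on synthetic service-ticket data; the features and numbers are invented purely for illustration.

```python
# Minimal sketch: group tickets by two numeric features with k-means.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Synthetic features: [resolution_hours, touches_per_ticket]
X = rng.normal(loc=[24, 3], scale=[10, 1.5], size=(200, 2))

# Scaling first keeps both features on comparable footing for k-means.
X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)

for k in range(3):
    print(f"cluster {k}: {np.sum(labels == k)} tickets")
```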

Posted 2 months ago

Apply

10.0 - 15.0 years

25 - 40 Lacs

Bengaluru

Work from Office

About Client: Hiring for one of the most prestigious multinational corporations!

Job Description
Job Title: AWS Solution Architect
Qualification: Any Graduate or above
Relevant Experience: 10-15 years
Required Technical Skill Set: Data lakes, data warehouses, AWS Glue, Aurora with PostgreSQL, MySQL, and DynamoDB
Location: Bangalore
CTC Range: 25-40 LPA
Notice Period: Any
Shift Timing: N/A
Mode of Interview: Virtual
Mode of Work: WFO (Work From Office)

Pooja Singh KS
IT Staffing Analyst
Black and White Business Solutions Pvt Ltd
Bangalore, Karnataka, INDIA
pooja.singh@blackwhite.in | www.blackwhite.in

Posted 2 months ago

Apply

6.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At Oracle Health, we put humans at the heart of every conversation. Our mission is to create a human-centric healthcare experience powered by unified global data. As a global leader, we're looking for a Data Engineer with BI experience to join an exciting project replacing existing data warehouse systems with Oracle's own data warehouse, which will store all internal corporate data and provide insights that help our teams make critical business decisions. Join us and create the future!

Career Level - IC3

Roles and Responsibilities:
Proficient in writing and optimising SQL queries for data extraction (a SQL-from-Python sketch follows this posting).
Translate client requirements into technical designs that junior team members can implement.
Develop code that aligns with the technical design and coding standards.
Review designs and code implemented by other team members; recommend better designs and more efficient code.
Conduct peer design and code reviews for early detection of defects and code quality issues.
Document ETL processes and data flow diagrams.
Optimize data extraction and transformation processes for better performance.
Perform data quality checks and debug issues.
Conduct root cause analysis for data issues and implement fixes.
Collaborate with more experienced developers on larger projects, and with stakeholders on requirements.
Participate in requirements, design, and implementation discussions.
Participate in learning and development opportunities to enhance technical skills.
Test the storage system after transferring the data.
Exposure to Business Intelligence platforms like OAC, Power BI, or Tableau.

Technical Skill Set:
You must be strong in PL/SQL concepts such as tables, keys, and DDL/DML commands.
You need to be proficient in writing and debugging complex SQL queries, views, and stored procedures.
Strong hands-on Python/PySpark programming skills.
As a Data Engineer, you must be strong in data modelling, ETL/ELT concepts, and programming/scripting in Python.
You must be proficient in the following ETL process automation tools: Oracle Data Integrator (ODI), Oracle Data Flow, and Oracle Database / Autonomous Data Warehouse.
Working knowledge of at least one cloud platform: Oracle Cloud (preferred), Microsoft Azure, or AWS.
You must be able to create technical designs, build prototypes, and build, maintain, and optimise high-performing ETL data pipelines.
Good knowledge of Business Intelligence development tools such as OAC and Power BI.
Good to have: Microsoft ADF, Data Lakes, and Databricks.
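One way to picture the SQL-driven quality checks described above from Python is with the python-oracledb driver. This is an assumption for illustration only (the posting names ODI and Autonomous Data Warehouse, not a specific driver), and the credentials and table are hypothetical.

```python
# Hedged sketch: run a typical pre-load data-quality check against an
# Oracle database. python-oracledb is an assumed driver choice; the DSN,
# user, and staging table below are hypothetical.
import oracledb

conn = oracledb.connect(
    user="etl_user",                           # hypothetical credentials
    password="***",
    dsn="dbhost.example.com:1521/ORCLPDB1",    # hypothetical DSN
)

with conn.cursor() as cur:
    cur.execute(
        "SELECT COUNT(*) FROM staging_orders WHERE order_id IS NULL"
    )
    null_ids, = cur.fetchone()
    print(f"rows with missing order_id: {null_ids}")

conn.close()
```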

Posted 2 months ago

Apply

3.0 - 5.0 years

20 - 22 Lacs

Udaipur

Work from Office

3-5 years of experience in Data Engineering or similar roles.
Strong foundation in cloud-native data infrastructure and scalable architecture design.
Build and maintain reliable, scalable ETL/ELT pipelines using modern cloud-based tools.
Design and optimize Data Lakes and Data Warehouses for real-time and batch processing.

Posted 2 months ago

Apply

0.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About VOIS:
VOIS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain, and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group's partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, VOIS has evolved into a global, multi-functional organisation, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone.

About VOIS India:
In 2009, VOIS started operating in India and now has established global delivery centres in Pune, Bangalore, and Ahmedabad. With more than 14,500 employees, VOIS India supports global markets and group functions of Vodafone, and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations, HR Operations, and more.

Job Description
Role purpose:
Create detailed data architecture documentation, including data models, data flow diagrams, and technical specifications.
Create and maintain data models for databases, data warehouses, and data lakes, defining relationships between data entities to optimize data retrieval and analysis.
Design and implement data pipelines to integrate data from multiple sources, ensuring data consistency and quality across systems.
Collaborate with business stakeholders to define the overall data strategy, aligning data needs with business requirements.
Support the migration of new and changed software; elaborate and perform production checks.
Effectively communicate complex data concepts to both technical and non-technical stakeholders.
GCP knowledge/experience with Cloud Composer, BigQuery, Pub/Sub, and Cloud Functions (a short Pub/Sub example follows this posting).
Strong communicator, experienced in leading and negotiating decisions and effective outcomes.
Strong overarching data architecture knowledge and experience, with the ability to govern the application of architecture principles within projects.

VOIS Equal Opportunity Employer Commitment India:
VOIS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights. We believe that being authentically human and inclusive powers our employees' growth and enables them to create a positive impact on themselves and society. We do not discriminate based on age, colour, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics. As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 10 Best Workplaces for Millennials, Equity, and Inclusion, Top 50 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM, and 10th Overall Best Workplaces in India by the Great Place to Work Institute in 2024. These achievements position us among a select group of trustworthy and high-performing companies which put their employees at the heart of everything they do. By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills! Apply now, and we'll be in touch!
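As a minimal sketch of the GCP stack named in the role purpose (Pub/Sub feeding BigQuery via Composer/Functions), here is a hedged example of publishing a JSON event to a topic; the project and topic names are hypothetical.

```python
# Minimal sketch: publish one event to a (hypothetical) Pub/Sub topic.
import json

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("demo-project", "usage-events")  # hypothetical

event = {"subscriber_hash": "abc123", "event": "data_session", "bytes": 10240}

# publish() takes bytes and returns a future; result() blocks until the
# broker acknowledges and returns the server-assigned message id.
future = publisher.publish(topic_path, json.dumps(event).encode("utf-8"))
print("published message id:", future.result())
```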

Posted 2 months ago

Apply

2.0 - 3.0 years

4 - 5 Lacs

Pune

Work from Office

The Data Engineer supports, develops, and maintains a data and analytics platform to efficiently process, store, and make data available to analysts and other consumers. This role collaborates with business and IT teams to understand requirements and best leverage technologies for agile data delivery at scale. Note: even though the role is categorized as Remote, it will follow a hybrid work model.

Key Responsibilities:
Implement and automate deployment of distributed systems for ingesting and transforming data from various sources (relational, event-based, unstructured).
Develop and operate large-scale data storage and processing solutions using cloud-based platforms (e.g., Data Lakes, Hadoop, HBase, Cassandra, MongoDB, DynamoDB).
Ensure data quality and integrity through continuous monitoring and troubleshooting.
Implement data governance processes, managing metadata, access, and data retention.
Develop scalable, efficient, and quality data pipelines with monitoring and alert mechanisms.
Design and implement physical data models and storage architectures based on best practices.
Analyze complex data elements and systems, data flow, dependencies, and relationships to contribute to conceptual, physical, and logical data models.
Participate in testing and troubleshooting of data pipelines.
Utilize agile development practices such as DevOps, Scrum, and Kanban for continuous improvement in data-driven applications.

Qualifications, Skills, and Experience:
Must-Have:
2-3 years of experience in data engineering with expertise in Azure Databricks and Scala/Python.
Hands-on experience with Spark (Scala/PySpark) and SQL.
Strong understanding of Spark Streaming, Spark internals, and query optimization (a streaming sketch follows this posting).
Proficiency in Azure cloud services.
Agile development experience.
Experience in unit testing of ETL pipelines.
Expertise in creating ETL pipelines integrating ML models.
Knowledge of Big Data storage strategies (optimization and performance).
Strong problem-solving skills.
Basic understanding of data models (SQL/NoSQL), including Delta Lake or Lakehouse.
Exposure to Agile software development methodologies.
Quick learner with adaptability to new technologies.

Nice-to-Have:
Understanding of the ML lifecycle.
Exposure to Big Data open-source technologies.
Experience with clustered compute cloud-based implementations.
Familiarity with developing applications requiring large file movement in cloud environments.
Experience in building analytical solutions.
Exposure to IoT technology.

Competencies:
System Requirements Engineering: translates stakeholder needs into verifiable requirements.
Collaborates: builds partnerships and works collaboratively with others.
Communicates Effectively: develops and delivers clear communications for various audiences.
Customer Focus: builds strong customer relationships and delivers customer-centric solutions.
Decision Quality: makes timely and informed decisions to drive progress.
Data Extraction: performs ETL activities from various sources using appropriate tools and technologies.
Programming: writes and tests computer code using industry standards, tools, and automation.
Quality Assurance Metrics: applies measurement science to assess solution effectiveness.
Solution Documentation: documents and communicates solutions to enable knowledge transfer.
Solution Validation Testing: ensures configuration changes meet design and customer requirements.
Data Quality: identifies and corrects data flaws to support governance and decision-making.
Problem Solving: uses systematic analysis to identify and resolve issues effectively.
Values Differences: recognizes and values diverse perspectives and cultures.

Education, Licenses, and Certifications:
College, university, or equivalent degree in a relevant technical discipline, or equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations.

Work Schedule:
Work primarily with stakeholders in the US, requiring a 2-3 hour overlap during EST hours as needed.
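A hedged sketch of the Spark Structured Streaming and Delta Lake skills listed above: consume a Kafka topic and append to a Delta table. The broker, topic, and paths are hypothetical, and a Databricks/Delta-enabled cluster with the Kafka connector package is assumed.

```python
# Minimal sketch: Kafka -> Delta append with checkpointing for exactly-once
# sink semantics. All endpoints and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "telemetry")                   # hypothetical topic
    .load()
    .select(F.col("value").cast("string").alias("payload"),
            F.col("timestamp"))
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/telemetry")
    .outputMode("append")
    .start("/mnt/delta/telemetry")
)
query.awaitTermination()
```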

Posted 3 months ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Pune

Work from Office

Please note: even though the GPP mentions Remote, this is a hybrid role.

Key Responsibilities:
Implement and automate deployment of distributed systems for ingesting and transforming data from various sources (relational, event-based, unstructured).
Continuously monitor and troubleshoot data quality and integrity issues.
Implement data governance processes and methods for managing metadata, access, and retention for internal and external users.
Develop reliable, efficient, scalable, and quality data pipelines with monitoring and alert mechanisms using ETL/ELT tools or scripting languages.
Develop physical data models and implement data storage architectures as per design guidelines.
Analyze complex data elements and systems, data flow, dependencies, and relationships to contribute to conceptual, physical, and logical data models.
Participate in testing and troubleshooting of data pipelines.
Develop and operate large-scale data storage and processing solutions using distributed and cloud-based platforms (e.g., Data Lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB).
Use agile development practices, such as DevOps, Scrum, Kanban, and continuous improvement cycles, for data-driven applications.

Qualifications:
College, university, or equivalent degree in a relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations.

Competencies:
System Requirements Engineering: translate stakeholder needs into verifiable requirements and establish acceptance criteria.
Collaborates: build partnerships and work collaboratively with others to meet shared objectives.
Communicates Effectively: develop and deliver multi-mode communications that convey a clear understanding of the unique needs of different audiences.
Customer Focus: build strong customer relationships and deliver customer-centric solutions.
Decision Quality: make good and timely decisions that keep the organization moving forward.
Data Extraction: perform ETL activities from various sources and transform them for consumption by downstream applications and users.
Programming: create, write, and test computer code, test scripts, and build scripts using industry standards and tools.
Quality Assurance Metrics: apply measurement science to assess whether a solution meets its intended outcomes.
Solution Documentation: document information and solutions based on knowledge gained during product development activities.
Solution Validation Testing: validate configuration item changes or solutions using best practices.
Data Quality: identify, understand, and correct flaws in data to support effective information governance.
Problem Solving: solve problems using systematic analysis processes and industry-standard methodologies.
Values Differences: recognize the value that different perspectives and cultures bring to an organization.

Skills and Experience Needed:
Must-Have:
3-5 years of experience in data engineering with a strong background in Azure Databricks and Scala/Python.
Hands-on experience with Spark (Scala/PySpark) and SQL.
Experience with Spark Streaming, Spark internals, and query optimization.
Proficiency in Azure cloud services.
Agile development experience.
Unit testing of ETL.
Experience creating ETL pipelines with ML model integration.
Knowledge of Big Data storage strategies (optimization and performance).
Critical problem-solving skills.
Basic understanding of data models (SQL/NoSQL), including Delta Lake or Lakehouse.
Quick learner.

Nice-to-Have:
Understanding of the ML lifecycle.
Exposure to Big Data open-source technologies.
Experience with Spark, Scala/Java, MapReduce, Hive, HBase, and Kafka.
SQL query language proficiency.
Experience with clustered compute cloud-based implementations.
Familiarity with developing applications requiring large file movement in a cloud-based environment.
Exposure to Agile software development.
Experience building analytical solutions.
Exposure to IoT technology.

Work Schedule:
Most of the work will be with stakeholders in the US, with an overlap of 2-3 hours during EST hours on a need basis.

Posted 3 months ago

Apply

3.0 - 8.0 years

6 - 12 Lacs

Hyderabad

Work from Office

Role Description:
The R&D Data Catalyst Team is responsible for building data searching, cohort-building, and knowledge management tools that give the Amgen scientific community visibility into Amgen's wealth of human datasets, project and study histories, and knowledge across various scientific findings. These solutions are pivotal tools in Amgen's goal to accelerate the speed of discovery and the speed to market of advanced precision medications.

The Sr. Data Engineer will be responsible for the end-to-end development of an enterprise analytics and data mastering solution leveraging Databricks and Power BI. This role requires expertise in both data architecture and analytics, with the ability to create scalable, reliable, and high-performing enterprise solutions that support research cohort-building and the advanced research pipeline. The ideal candidate will have experience creating and surfacing large unified repositories of human data, based on integrations from multiple repositories and solutions, and be exceptionally skilled in data analysis and profiling. You will collaborate closely with stakeholders, product team members, and related IT teams to design and implement data models, integrate data from various sources, and ensure best practices for data governance and security. The ideal candidate will have a strong background in data warehousing, ETL, Databricks, Power BI, and enterprise data mastering.

Roles & Responsibilities:
Design and build scalable enterprise analytics solutions using Databricks, Power BI, and other modern data tools.
Leverage data virtualization, ETL, and semantic layers to balance the need for unification, performance, and data transformation with the goal of reducing data proliferation.
Break down features into work that aligns with the architectural direction runway.
Participate hands-on in pilots and proofs-of-concept for new patterns.
Create robust documentation from data analysis and profiling, and from proposed designs and data logic.
Develop advanced SQL queries to profile and unify data (a profiling sketch follows this posting).
Develop data processing code in SQL, along with semantic views to prepare data for reporting.
Develop Power BI models and reporting packages.
Design robust data models and processing layers that support both analytical processing and operational reporting needs.
Design and develop solutions based on best practices for data governance, security, and compliance within Databricks and Power BI environments.
Ensure the integration of data systems with other enterprise applications, creating seamless data flows across platforms.
Develop and maintain Power BI solutions, ensuring data models and reports are optimized for performance and scalability.
Collaborate with stakeholders to define data requirements, functional specifications, and project goals.
Continuously evaluate and adopt new technologies and methodologies to enhance the architecture and performance of data solutions.

Basic Qualifications and Experience:
Master's degree with 4 to 6 years of experience as a Product Owner / Platform Owner / Service Owner, OR
Bachelor's degree with 8 to 10 years of experience as a Product Owner / Platform Owner / Service Owner.

Functional Skills:
Must-Have Skills:
Minimum of 3 years of hands-on experience with BI solutions (preferably Power BI or Business Objects), including report development, dashboard creation, and optimization.
Minimum of 6 years of hands-on experience building change-data-capture (CDC) ETL pipelines, data warehouse design and build, and enterprise-level data management.
Hands-on experience with Databricks, including data engineering, optimization, and analytics workloads.
Deep understanding of Power BI, including model design, DAX, and Power Query.
Proven experience designing and implementing data mastering solutions and data governance frameworks.
Expertise in cloud platforms (AWS), data lakes, and data warehouses.
Strong knowledge of ETL processes, data pipelines, and integration technologies.
Strong communication and collaboration skills to work with cross-functional teams and senior leadership.
Ability to assess business needs and design solutions that align with organizational goals.
Exceptional hands-on capabilities with data profiling, data transformation, and data mastering.
Success in mentoring and training team members.

Good-to-Have Skills:
Experience in developing differentiated and deliverable solutions.
Experience with human data, ideally human healthcare data.
Familiarity with laboratory testing, patient data from clinical care, HL7, FHIR, and/or clinical trial data management.

Professional Certifications:
ITIL Foundation or other relevant certifications (preferred).
SAFe Agile Practitioner (6.0).
Microsoft Certified: Data Analyst Associate (Power BI) or related certification.
Databricks Certified Professional or similar certification.

Soft Skills:
Excellent analytical and troubleshooting skills.
Deep intellectual curiosity.
Highest degree of initiative and self-motivation.
Strong verbal and written communication skills, including presenting complex technical/business topics to varied audiences.
Confident technical leader.
Ability to work effectively with global, virtual teams, including leveraging tools and artifacts to ensure clear and efficient collaboration across time zones.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Strong problem-solving and analytical skills; ability to learn quickly and to retain and synthesize complex information from diverse sources.
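The SQL profiling responsibility above might look like the following in a Databricks notebook, where the `spark` session is provided by the platform; the table and column names are hypothetical stand-ins for a unified human-data repository.

```python
# Hedged sketch: a basic profiling query over a (hypothetical) lab-results
# table, covering row counts, distinct keys, null rates, and date coverage.
profile = spark.sql("""
    SELECT
        COUNT(*)                                              AS row_count,
        COUNT(DISTINCT subject_id)                            AS distinct_subjects,
        SUM(CASE WHEN result_value IS NULL THEN 1 ELSE 0 END) AS null_results,
        MIN(collected_at)                                     AS earliest_sample,
        MAX(collected_at)                                     AS latest_sample
    FROM lab_results
""")
profile.show()
```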

Posted 3 months ago

Apply

5.0 - 10.0 years

6 - 10 Lacs

Delhi, India

On-site

Responsibilities:
Data Analytics & Insight Generation (30%):
Analyze marketing, digital, and campaign data to uncover patterns and deliver actionable insights.
Support performance measurement, experimentation, and strategic decision-making across the marketing funnel.
Translate business questions into structured analyses and data-driven narratives.

Data Infrastructure & Engineering (30%):
Design and maintain scalable data pipelines and workflows using SQL, Python, and Databricks.
Build and evolve a marketing data lake, integrating APIs and data from multiple platforms and tools.
Work across cloud environments (Azure, AWS) to support analytics-ready data at scale.

Project & Delivery Ownership (25%):
Serve as project lead or scrum owner across analytics initiatives: planning sprints, managing delivery, and driving alignment.
Use tools like JIRA to manage work in an agile environment and ensure timely execution.
Collaborate with cross-functional teams to align priorities and execute on roadmap initiatives.

Visualization & Platform Enablement (15%):
Build high-impact dashboards and data products using Tableau, with a focus on usability, scalability, and performance.
Enable stakeholder self-service through clean data architecture and visualization best practices.
Experiment with emerging tools and capabilities, including GenAI for assisted analytics.

Experience:
5+ years of experience in data analytics, digital analytics, or data engineering, ideally in a marketing or commercial context.
Hands-on experience with SQL, Python, and tools such as Databricks, Azure, or AWS.
Proven track record of building and managing data lakes, ETL pipelines, and API integrations.
Strong proficiency in Tableau; experience with Tableau Prep is a plus.
Familiarity with Google Analytics (GA4), GTM, and social media analytics platforms.
Experience working in agile teams, with comfort using JIRA for sprint planning and delivery.
Exposure to predictive analytics, modeling, and GenAI applications is a plus.
Strong communication and storytelling skills; able to lead high-stakes meetings and deliver clear insights to senior stakeholders.
Excellent organizational and project management skills; confident in managing competing priorities.
High attention to detail, an ownership mindset, and a collaborative, delivery-focused approach.

Posted 3 months ago

Apply

4.0 - 7.0 years

4 - 7 Lacs

Hyderabad / Secunderabad, Telangana, Telangana, India

On-site

Roles & Responsibilities:
Design and build scalable enterprise analytics solutions using Databricks, Power BI, and other modern data management tools.
Leverage data virtualization, ETL, and semantic layers to balance the need for unification, performance, and data transformation with the goal of reducing data proliferation.
Break down features into work that aligns with the architectural direction runway.
Participate hands-on in pilots and proofs-of-concept for new patterns.
Create robust documentation from data analysis and profiling, and from proposed designs and data logic.
Develop advanced SQL queries to profile and unify data.
Develop data processing code in SQL, along with semantic views to prepare data for reporting.
Develop Power BI models and reporting packages.
Design robust data models and processing layers that support both analytical processing and operational reporting needs.
Design and develop solutions based on best practices for data governance, security, and compliance within Databricks and Power BI environments.
Ensure the integration of data systems with other enterprise applications, creating seamless data flows across platforms.
Develop and maintain Power BI solutions, ensuring data models and reports are optimized for performance and scalability.
Collaborate with partners to define data requirements, functional specifications, and project goals.
Continuously evaluate and adopt new technologies and methodologies to enhance the architecture and performance of data solutions.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients. The professional we seek is someone with these qualifications.

Basic Qualifications:
Master's degree with 1 to 3 years of experience in Data Engineering, OR
Bachelor's degree with 1 to 3 years of experience in Data Engineering.

Must-Have Skills:
Minimum of 1 year of hands-on experience with BI solutions (preferably Power BI or Business Objects), including report development, dashboard creation, and optimization.
Minimum of 1 year of hands-on experience building change-data-capture (CDC) ETL pipelines, data warehouse design and build, and enterprise-level data management.
Hands-on experience with Databricks, including data engineering, optimization, and analytics workloads.
Experience using cloud platforms (AWS), data lakes, and data warehouses.
Working knowledge of ETL processes, data pipelines, and integration technologies.
Good communication and collaboration skills to work with cross-functional teams and senior leadership.
Ability to assess business needs and design solutions that align with organizational goals.
Exceptional hands-on capabilities with data profiling and data analysis.

Good-to-Have Skills:
Experience with human data, ideally human healthcare data.
Familiarity with laboratory testing, patient data from clinical care, HL7, FHIR, and/or clinical trial data management.

Professional Certifications:
ITIL Foundation or other relevant certifications (preferred).
SAFe Agile Practitioner (6.0).
Microsoft Certified: Data Analyst Associate (Power BI) or related certification.
Databricks Certified Professional or similar certification.

Posted 3 months ago

Apply

1.0 - 3.0 years

1 - 3 Lacs

Hyderabad / Secunderabad, Telangana, Telangana, India

On-site

Roles & Responsibilities:
Design and build scalable enterprise analytics solutions using Databricks, Power BI, and other modern data management tools.
Leverage data virtualization, ETL, and semantic layers to balance the need for unification, performance, and data transformation with the goal of reducing data proliferation.
Break down features into work that aligns with the architectural direction runway.
Participate hands-on in pilots and proofs-of-concept for new patterns.
Create robust documentation from data analysis and profiling, and from proposed designs and data logic.
Develop advanced SQL queries to profile and unify data.
Develop data processing code in SQL, along with semantic views to prepare data for reporting.
Develop Power BI models and reporting packages.
Design robust data models and processing layers that support both analytical processing and operational reporting needs.
Design and develop solutions based on best practices for data governance, security, and compliance within Databricks and Power BI environments.
Ensure the integration of data systems with other enterprise applications, creating seamless data flows across platforms.
Develop and maintain Power BI solutions, ensuring data models and reports are optimized for performance and scalability.
Collaborate with partners to define data requirements, functional specifications, and project goals.
Continuously evaluate and adopt new technologies and methodologies to enhance the architecture and performance of data solutions.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients. The professional we seek is someone with these qualifications.

Basic Qualifications:
Master's degree with 1 to 3 years of experience in Data Engineering, OR
Bachelor's degree with 1 to 3 years of experience in Data Engineering.

Must-Have Skills:
Minimum of 1 year of hands-on experience with BI solutions (preferably Power BI or Business Objects), including report development, dashboard creation, and optimization.
Minimum of 1 year of hands-on experience building change-data-capture (CDC) ETL pipelines, data warehouse design and build, and enterprise-level data management.
Hands-on experience with Databricks, including data engineering, optimization, and analytics workloads.
Experience using cloud platforms (AWS), data lakes, and data warehouses.
Working knowledge of ETL processes, data pipelines, and integration technologies.
Good communication and collaboration skills to work with cross-functional teams and senior leadership.
Ability to assess business needs and design solutions that align with organizational goals.
Exceptional hands-on capabilities with data profiling and data analysis.

Posted 3 months ago

Apply

8.0 - 10.0 years

8 - 10 Lacs

Hyderabad / Secunderabad, Telangana, Telangana, India

On-site

Amgen's Clinical Computation Platform Product Team manages a core set of clinical computation solutions that support global clinical development. The team is responsible for building and maintaining systems for clinical data storage, data auditing and security management, and analysis and reporting capabilities. These capabilities are pivotal to Amgen's goal to serve patients. The Principal IS Architect will define the architecture vision, create roadmaps, and support the design and implementation of advanced computational platforms for clinical development, ensuring that IT strategies align with business goals. The Principal IS Architect will work closely with partners across departments, including CfDA, GDO, CfOR, CfTI, CPMS, and IT teams, to design and implement scalable, reliable, and high-performing solutions.

Key Responsibilities:
Develop and maintain the enterprise architecture vision and strategy, ensuring alignment with business objectives
Create and maintain architectural roadmaps that guide the evolution of IT systems and capabilities
Establish and enforce architectural standards, policies, and governance frameworks
Evaluate emerging technologies and assess their potential impact on the enterprise/domain/solution architecture
Identify and mitigate architectural risks, ensuring that IT systems are scalable, secure, and resilient
Maintain comprehensive documentation of the architecture, including principles, standards, and models
Drive continuous improvement in the architecture by finding opportunities for innovation and efficiency
Work with partners to gather and analyze requirements, ensuring that solutions meet both business and technical needs
Evaluate and recommend technologies and tools that best fit the solution requirements
Ensure seamless integration between systems and platforms, both within the organization and with external partners
Design systems that can scale to meet growing business needs and performance demands
Develop and maintain logical, physical, and conceptual data models to support business needs
Establish and enforce data standards, governance policies, and best practices
Design and manage metadata structures to enhance information retrieval and usability (see the sketch below)

Basic Qualifications:
Master's degree with 8 to 10 years of experience in Computer Science, IT, or a related field, OR Bachelor's degree with 10 to 14 years of experience, OR Diploma with 14 to 18 years of experience
Proficiency in designing scalable, secure, and cost-effective solutions
Expertise in cloud platforms (AWS, Azure, GCP), data lakes, and data warehouses
Experience in evaluating and selecting technology vendors
Ability to create and demonstrate proof-of-concept solutions to validate technical feasibility
Strong knowledge of the Clinical Research and Development domain
Experience working in Agile methodology, including Product Teams and Product Development models

Preferred Qualifications:
Strong solution design and problem-solving skills
Experience in developing differentiated and deliverable solutions
Ability to analyze client requirements and translate them into solutions
Experience with machine learning and artificial intelligence applications in clinical research
Strong programming skills in languages such as Python, R, or Java
Experience with DevOps, Continuous Integration, and Continuous Delivery methodology

Soft Skills:
Excellent critical-thinking and problem-solving skills
Good communication and collaboration skills
Ability to function effectively in a team setting
Strong presentation skills
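For illustration, a minimal Python sketch of the kind of governed metadata structure this role might define: a catalog entry with an overdue-review check. The entity, fields, and review window are hypothetical assumptions, not an actual Amgen schema.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical catalog entry: one record describing a governed clinical dataset.
# Field names and the review policy are illustrative assumptions.
@dataclass
class DatasetMetadata:
    name: str                    # logical dataset name
    steward: str                 # accountable data steward
    classification: str          # e.g. "confidential", "restricted"
    retention_years: int         # governance-mandated retention period
    tags: list[str] = field(default_factory=list)
    last_reviewed: date | None = None

    def needs_review(self, today: date, max_age_days: int = 365) -> bool:
        """Flag entries whose governance review is overdue."""
        return (
            self.last_reviewed is None
            or (today - self.last_reviewed).days > max_age_days
        )

catalog = [
    DatasetMetadata("clinical_lab_results", "j.doe", "restricted", 25,
                    tags=["clinical", "labs"], last_reviewed=date(2024, 1, 15)),
]
print([d.name for d in catalog if d.needs_review(date.today())])
```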

Posted 3 months ago

Apply

10.0 - 12.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Category: Sales

Job Details

About Salesforce: We're Salesforce, the Customer Company, inspiring the future of business with AI + Data + CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And we empower you to be a Trailblazer, too - driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change, and in companies doing well and doing good, you've come to the right place.

Overview of the Role: We have an outstanding opportunity for an expert AI and Data Cloud Solutions Engineer to work with our trailblazing customers in crafting ground-breaking customer engagement roadmaps that demonstrate the Salesforce applications and platform across the machine learning and LLM/GPT domains in India. The successful applicant will have a track record of driving business outcomes through technology solutions, with experience engaging at the C-level with business and technology groups.

Responsibilities:
Primary pre-sales technical authority for all aspects of AI usage within the Salesforce product portfolio - existing Einstein ML-based capabilities and new (2023) generative AI
Majority of time (60%+) will be customer/external facing
Evangelisation of Salesforce AI capabilities
Assessing customer requirements and use cases and aligning them to these capabilities
Solution proposals, working with Architects and the wider Solution Engineer (SE) teams
Building reference models/ideas/approaches for the inclusion of GPT-based products within wider Salesforce solution architectures, especially involving Data Cloud (see the sketch below)
Alignment with customer security and privacy teams on the trust capabilities and values of our solution(s)
Presenting at multiple customer events, from single-account sessions through to major strategic events (World Tour, Dreamforce)
Representing Salesforce at other events (subject to PM approval)
Sales and SE organisation education and enablement, e.g. roadmap - all roles across all product areas
Bridge/primary contact point to product management
Provide thought leadership on how large enterprise organisations can drive customer success through digital transformation
Uncover the challenges and issues a business is facing by running successful and targeted discovery sessions and workshops
Be an innovator who can build new solutions using out-of-the-box thinking
Demonstrate the business value of our AI solutions using solution presentations, demonstrations, and prototypes
Build roadmaps that clearly articulate how partners can implement and adopt solutions to move from current to future state
Deliver functional and technical responses to RFPs/RFIs
Work as an excellent teammate by contributing, learning, and sharing new knowledge
Demonstrate a conceptual knowledge of how to integrate cloud applications with existing business applications and technology
Lead multiple customer engagements concurrently
Be self-motivated, flexible, and take initiative

Required Qualifications: Experience will be evaluated based on the core proficiencies of the role.
4+ years working directly in the commercial technology space with AI products and solutions
10+ years working in a sales, pre-sales, consulting, or related function in a commercial software company
Data knowledge - data science, data lakes and warehouses, ETL, ELT, data quality
AI knowledge - application of algorithms and models to solve business problems (ML, LLMs, GPT)
Strong focus and experience in pre-sales or implementation is required
Experience demonstrating customer engagement solutions; ability to understand and drive use cases and customer journeys, and to draw a day-in-the-life view across different LOBs
Business analysis, business case, and return-on-investment construction
Demonstrable experience presenting and communicating complex concepts to large audiences
A broad understanding of, and the ability to articulate, the benefits of CRM, Sales, Service, and Marketing Cloud offerings
Strong verbal and written communication skills with a focus on needs analysis, positioning, business justification, and closing techniques
Continuous-learning demeanour with a demonstrated history of self-enablement and advancement in both technology and behavioural areas

Preferred Qualifications:
Expertise in an AI-related subject (ML, deep learning, NLP, etc.)
Familiarity with technologies such as OpenAI, Google Vertex, Amazon SageMaker, Snowflake, Databricks, etc.

Accommodations: If you require assistance due to a disability when applying for open positions, please submit a request via this .

Posting Statement: Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all. And we believe we can lead the path to equality in part by creating a workplace that's inclusive and free from discrimination. Any employee or potential employee will be assessed on the basis of merit, competence, and qualifications - without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.
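As a hedged illustration of the retrieval-grounded generative-AI pattern this role evangelises, here is a minimal Python sketch that grounds an LLM prompt in retrieved customer data before generation. It uses the public OpenAI client for concreteness; the model name, prompt, and retrieval stub are assumptions for the example, not Salesforce's Einstein or Data Cloud APIs.

```python
# Minimal retrieval-grounded prompt sketch, assuming the public `openai` package.
# The retrieval stub, model choice, and prompt are illustrative, not a Salesforce API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def fetch_customer_context(account_id: str) -> str:
    # Hypothetical stand-in for a CRM / Data Cloud lookup.
    return "Account: Acme Corp. Open cases: 2. Last purchase: 2024-11-02."

def draft_engagement_summary(account_id: str) -> str:
    context = fetch_customer_context(account_id)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Summarise the account and suggest one next-best action. "
                        "Use only the supplied context."},
            {"role": "user", "content": context},
        ],
    )
    return response.choices[0].message.content

print(draft_engagement_summary("001XX000003DHPh"))
```

Grounding the prompt in retrieved data, rather than asking the model to answer from its own weights, is what keeps outputs auditable against the customer's source systems.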

Posted 3 months ago

Apply

1.0 - 3.0 years

3 - 5 Lacs

Hyderabad

Work from Office

What you will do

The R&D Precision Medicine team is responsible for Data Standardization, Data Searching, Cohort Building, and Knowledge Management tools that provide the Amgen scientific community with access to Amgen's wealth of human datasets, projects and study histories, and knowledge of various scientific findings. These data include clinical data, omics, and images. These solutions are pivotal tools in Amgen's goal to accelerate the speed of discovery and the speed to market of advanced precision medications. The Data Engineer will be responsible for full-stack development of enterprise analytics and data mastering solutions leveraging Databricks and Power BI. This role requires expertise in both data architecture and analytics, with the ability to create scalable, reliable, and high-performing enterprise solutions that support research cohort-building and advanced AI pipelines. The ideal candidate will have experience creating and surfacing large unified repositories of human data, based on integrations from multiple repositories and solutions, and be exceptionally skilled in data analysis and profiling. You will collaborate closely with partners, product team members, and related IT teams to design and implement data models, integrate data from various sources, and ensure best practices for data governance and security. The ideal candidate will have a solid background in data warehousing, ETL, Databricks, Power BI, and enterprise data mastering.

Roles & Responsibilities:
Design and build scalable enterprise analytics solutions using Databricks, Power BI, and other modern data management tools
Leverage data virtualization, ETL, and semantic layers to balance the need for unification, performance, and data transformation with the goal of reducing data proliferation
Break down features into work that aligns with the architectural direction runway
Participate hands-on in pilots and proofs-of-concept for new patterns
Create robust documentation from data analysis and profiling, and from proposed designs and data logic
Develop advanced SQL queries to profile and unify data
Develop data processing code in SQL, along with semantic views, to prepare data for reporting
Develop Power BI models and reporting packages
Design robust data models and processing layers that support both analytical processing and operational reporting needs
Design and develop solutions based on best practices for data governance, security, and compliance within Databricks and Power BI environments
Ensure the integration of data systems with other enterprise applications, creating seamless data flows across platforms
Develop and maintain Power BI solutions, ensuring data models and reports are optimized for performance and scalability
Collaborate with partners to define data requirements, functional specifications, and project goals
Continuously evaluate and adopt new technologies and methodologies to enhance the architecture and performance of data solutions

What we expect of you

We are all different, yet we all use our unique contributions to serve patients. The professional we seek is someone with these qualifications.

Basic Qualifications:
Master's degree with 1 to 3 years of experience in Data Engineering, OR Bachelor's degree with 1 to 3 years of experience in Data Engineering

Must-Have Skills:
Minimum of 1 year of hands-on experience with BI solutions (preferably Power BI or Business Objects), including report development, dashboard creation, and optimization
Minimum of 1 year of hands-on experience building change-data-capture (CDC) ETL pipelines, data warehouse design and build, and enterprise-level data management (see the sketch below)
Hands-on experience with Databricks, including data engineering, optimization, and analytics workloads
Experience using cloud platforms (AWS), data lakes, and data warehouses
Working knowledge of ETL processes, data pipelines, and integration technologies
Good communication and collaboration skills to work with cross-functional teams and senior leadership
Ability to assess business needs and design solutions that align with organizational goals
Exceptional hands-on capabilities with data profiling and data analysis

Good-to-Have Skills:
Experience with human data, ideally human healthcare data
Familiarity with laboratory testing, patient data from clinical care, HL7, FHIR, and/or clinical trial data management

Professional Certifications:
ITIL Foundation or other relevant certifications (preferred)
SAFe Agile Practitioner (6.0)
Microsoft Certified: Data Analyst Associate (Power BI) or related certification
Databricks Certified Professional or similar certification

Soft Skills:
Excellent analytical and troubleshooting skills
Deep intellectual curiosity
Highest degree of initiative and self-motivation
Strong verbal and written communication skills, including presentation of complex technical/business topics to varied audiences
Confident technical leader
Ability to work effectively with global, virtual teams, including the use of tools and artifacts to ensure clear and efficient collaboration across time zones
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Strong problem-solving and analytical skills; ability to learn quickly and to retain and synthesize complex information from diverse sources
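To make the CDC pipeline requirement above concrete, here is a minimal PySpark sketch of merging a change feed into a Delta table on Databricks. Table names, the key column, and the `op` change flag are assumptions for illustration, not part of the posting.

```python
# Minimal CDC upsert sketch for Databricks (PySpark + Delta Lake).
# Table names, key column, and the `op` change flag are illustrative assumptions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

changes = spark.read.table("staging.patient_changes")  # hypothetical CDC feed
target = DeltaTable.forName(spark, "curated.patients")

(
    target.alias("t")
    .merge(changes.alias("s"), "t.patient_id = s.patient_id")
    .whenMatchedDelete(condition="s.op = 'D'")                # deletes from the feed
    .whenMatchedUpdateAll(condition="s.op = 'U'")             # changed rows
    .whenNotMatchedInsertAll(condition="s.op IN ('I', 'U')")  # new rows
    .execute()
)
```

A single MERGE like this keeps the curated table idempotent with respect to replayed change batches, which is what makes the pipeline safe to re-run after failures.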

Posted 3 months ago

Apply

1.0 - 3.0 years

3 - 5 Lacs

Hyderabad

Work from Office

What you will do

In this vital role you will support an ambitious program to evolve how Amgen does forecasting, moving from batch processes (e.g., sales forecasting to COGS forecasting, clinical study forecasting) to a more continuous process. The hardworking professional we seek is curious by nature, organizationally and data savvy, with a strong record of Finance transformation, partner management, and accomplishments in Finance, Accounting, or Procurement. This role will help redesign existing processes to incorporate Artificial Intelligence and Machine Learning capabilities to significantly reduce the time and resources needed to build forecasts. As the Next Gen Forecasting Senior Associate at Amgen India, you will drive innovation and continuous improvement in Finance's planning, reporting, and data processes, with a focus on maximizing current technologies and adopting new technologies where relevant. This individual will collaborate with cross-functional teams and support business objectives. This role reports directly to the Next Gen Forecasting Manager in Hyderabad, India.

Roles & Responsibilities: Priorities can often change in a fast-paced technology environment like Amgen's, so this role includes, but is not limited to, the following:
Support implementation of real-time / continuous forecasting capabilities (see the sketch below)
Establish baseline analyses; define current and future state using traditional approaches and emerging digital technologies
Identify which areas would benefit most from automation / AI / ML
Identify additional process / governance changes to move from batch to continuous forecasting
Closely partner with Business, Accounting, FP&A, Technology, and other impacted functions to define and implement proposed changes
Partner with the Amgen Technology function to support both existing and new finance platforms
Partner with local and global teams on use cases for Artificial Intelligence (AI), Machine Learning (ML), and Robotic Process Automation (RPA)
Collaborate with cross-functional teams and Centers of Excellence globally to drive operational efficiency
Contribute to a learning environment and enhance learning methodologies of technical tools where applicable
Serve as the local financial systems and financial data subject matter expert, supporting the local team with questions
Support global finance teams and business partners with centrally delivered financial reporting via Tableau and other tools
Support local adoption of Anaplan for operating expense planning / tracking

What we expect of you

We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
Master's degree and 1 to 3 years of Finance experience, OR Bachelor's degree and 3 to 5 years of Finance experience, OR Diploma and 7 to 9 years of Finance experience
Consistent record of launching new finance capabilities
Proficiency in data analytics and business intelligence tools
Experience with finance reporting and planning system technologies
Experience with technical support of financial platforms
Knowledge of financial management and accounting principles
Experience with ERP systems
Resourceful individual who can connect the dots across a matrixed organization

Preferred Qualifications:
Experience in the pharmaceutical and/or biotechnology industry
Experience in financial planning, analysis, and reporting
Experience with global finance operations
Knowledge of advanced financial modeling techniques
Business performance management
Finance transformation experience involving recent technology advancements
Prior multinational capability center experience
Experience with Oracle Hyperion/EPM, S4/SAP, Anaplan, Tableau/Power BI, Databricks, Alteryx, data lakes, and data structures

Soft Skills:
Excellent project management abilities
Strong communication and interpersonal skills
High level of integrity and ethical standards
Problem-solving and critical thinking capabilities
Ability to influence and motivate change
Adaptability to a dynamic and fast-paced environment
Strong organizational and time management skills
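As a toy illustration of the batch-to-continuous shift described above, the Python sketch below refreshes a simple rolling forecast every time a new actual arrives, instead of once per planning cycle. The data, window size, and naive rolling-mean model are invented for the example; a real pipeline would substitute an ML model.

```python
# Toy continuous-forecasting loop: refresh the forecast as each actual lands,
# instead of waiting for a batch planning cycle. Data and window are made up.
import pandas as pd

actuals = pd.Series(dtype=float)  # grows as new monthly actuals arrive

def next_month_forecast(history: pd.Series, window: int = 3) -> float:
    """Naive rolling-mean forecast; a real pipeline would use an ML model."""
    return history.tail(window).mean()

for month, value in [("2025-01", 100.0), ("2025-02", 104.0),
                     ("2025-03", 98.0), ("2025-04", 110.0)]:
    actuals.loc[month] = value                # new actual lands
    forecast = next_month_forecast(actuals)   # forecast refreshes immediately
    print(f"after {month}: next-month forecast = {forecast:.1f}")
```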

Posted 3 months ago

Apply

10.0 - 12.0 years

0 Lacs

Delhi, India

On-site

Job Description: The candidate must have a minimum of 10 years of experience in solution planning and system architecture design, with at least 4 years of experience as a Lead Architect for DW/BI or Big Data systems. The candidate should have demonstrated extensive experience in using various techniques to develop robust data warehouse / business intelligence solutions, with a significant weighting toward statistical and advanced analytics.

Key Responsibilities:
As a Data Lake / Big Data Architect, lead the engagement efforts at different stages, from problem definition to diagnosis to solution design, development, and deployment, in large government implementation programs
Create detailed design, architecture, and process artifacts; implement the solution and the deployment plan
Connect with senior client business and IT stakeholders, demonstrating thought leadership in domain, process, and technology

Required Skills and Experience (domain, process, functional, technical):
Strong hands-on and in-depth knowledge of Data Lakes / Big Data modules
Strong understanding of data modelling concepts
Strong understanding of Data Warehousing / Business Intelligence / AI solutions; good understanding of the airlines industry
Thorough understanding of Agile methodologies
Good understanding of business processes in the airlines domain or with government organizations
Experience in leading and driving business process workshops and Fit-GAP analysis
Working experience in a highly regulated environment
Awareness of release governance processes and experience working with an incident management tool

Preferred Skills: Foundational->Development process generic->SaaS Development Process

Posted 3 months ago

Apply

0.0 years

0 Lacs

Hyderabad / Secunderabad, Telangana, India

On-site

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Senior Manager, Sourcing and Procurement

Responsibilities:
Project management, including developing project plans, processes, and timelines
Design, manage, and run customized reports for category managers, directors, and executives
Liaise with IT to ensure necessary data requirements are met, and build a knowledge base of how data flows, where it is stored, and how to access it in our enterprise system
Assist in developing commercial evaluation criteria
Support supplier segmentation
Identify and implement standardization within processes
Develop and own critical Master Data Management inputs, including, but not limited to, BU Tier mapping (DRM), Sourcing Taxonomy, and Region/Country/City/Site mappings
Maintain rhythms with key stakeholders/customers as a means of checking in on data quality issues and pending action items
Partner with and influence external stakeholders who control the input data process, including building strong relationships with IT in GE Businesses and at the corporate level
Conduct executive presentations for project updates as well as extended training on spend analytics tools, often to an audience of 100+ participants
Lead periodic perception alignments with Sourcing leaders and business and category teams alike, to ensure data accuracy exceeds 90% best-in-class benchmarks (Tier 1-3 levels) through validation of rule-based supplier classification (see the sketch below)
Partner with the Data Science team to ensure implementation of MDM standards across platforms (Spend, Savings, Cash, Contracts, Preferred Supplier, and others to be implemented)
Own and manage reporting and analytics tools built on top of the Finance Data Lake, including wing-to-wing management of enhancement implementation:
Executive Dashboard - summary-level data visualizations on Tableau
Data Extraction Tool - Finance Data Store
Change of Classification Workflow - interactive workflow tool for user input on Spend Categorization, Supplier Normalization, and Spend Exclusion
Lead the project to integrate the Tableau Spend Dashboard and the raw data extraction tool (passing filters between the two tools)
Remain agile to deliver on other ad hoc, fire-drill data requests
Manage the project of developing new tools/visualizations aimed at proactive analytics/forecasting models

Qualifications

Minimum qualifications:
Excellent interpersonal communication skills
Excellent project management and execution
Analytical and problem-solving skills
Exposure to spend analytics and opportunity assessment
Good understanding of the Sourcing & Procurement domain
Good understanding of the technology landscape, such as Data Science / AI / ML / Data Engineering concepts
Familiarity with big data and analytics architecture (e.g., Data Lakes)
Executive presentation skills
People management experience

Preferred qualifications:
Good competency in any indirect sourcing category
Knowledge of the systems landscape in the S2P space
Effective at narrating the story from the data using compelling presentations
Communicate clearly and concisely, both orally and in writing
Establish and maintain effective working relationships with those contacted in the course of work
Experience managing data for multiple Oracle, SAP, and other ERP platforms

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Get to know us at www.genpact.com and on Facebook, LinkedIn, and YouTube. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
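For illustration, a minimal Python sketch of the rule-based supplier classification mentioned above: spend lines are tagged by ordered keyword rules, with unmatched rows routed to manual review. The rules, categories, and sample data are invented for the example.

```python
# Toy rule-based spend categorization: first matching keyword rule wins.
# Rules, categories, and sample rows are invented for illustration.
import pandas as pd

RULES = [  # (keyword in supplier name, category) - order encodes priority
    ("staffing", "Contingent Labor"),
    ("software", "IT - Software"),
    ("freight", "Logistics"),
]

def classify(supplier: str) -> str:
    name = supplier.lower()
    for keyword, category in RULES:
        if keyword in name:
            return category
    return "Unclassified"  # routed to manual review / classification workflow

spend = pd.DataFrame({
    "supplier": ["Acme Staffing Ltd", "Globex Software",
                 "Initech Freight", "Umbrella Co"],
    "amount": [120_000, 45_000, 9_500, 3_200],
})
spend["category"] = spend["supplier"].map(classify)
classified_share = (spend["category"] != "Unclassified").mean()
print(spend, f"\nclassified share: {classified_share:.0%}")  # vs. 90% benchmark
```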

Posted 3 months ago

Apply

10.0 - 17.0 years

12 - 19 Lacs

Chennai, Bengaluru

Work from Office

Job Purpose: We are seeking an experienced ADF Technical Architect with 10 to 17 years of proven expertise in data lakes, lakehouses, Synapse Analytics, Databricks, T-SQL, SQL Server, Synapse DB, and data warehouses. Prior experience as a Technical Architect, Technical Lead, Senior Data Engineer, or similar is required, along with strong communication skills.

The ideal candidate should have:

Key Responsibilities:
Participate in data strategy and roadmap exercises, data architecture definition, business intelligence / data warehouse solution and platform selection, design and blueprinting, and implementation
Lead other team members and provide technical leadership in all phases of a project, from discovery and planning through implementation and delivery
Work on RFPs and RFQs
Work through all stages of a data solution life cycle: analyze/profile data; create conceptual, logical, and physical data model designs; architect and design ETL, reporting, and analytics solutions
Lead source-to-target mapping, define interface processes and standards, and implement the standards
Perform root cause analysis and develop data remediation solutions
Develop and implement proactive monitoring and alert mechanisms for data issues (see the sketch below)
Collaborate with other workstream leads to ensure the overall developments are in sync
Identify risks and opportunities from potential logic and data issues within the data environment
Guide, influence, and mentor junior members of the team
Collaborate effectively with the onsite-offshore team and ensure day-to-day deliverables are met

Qualifications & Key skills required:
Bachelor's degree and 10+ years of experience in the related data and analytics area
Demonstrated knowledge of modern data solutions such as Azure Data Fabric, Synapse Analytics, lakehouses, and data lakes
Strong source-to-target mapping experience and ETL principles/knowledge
Prior experience as a Technical Architect, Technical Lead, Senior Data Engineer, or similar is required
Excellent verbal and written communication skills
Strong quantitative and analytical skills with accuracy and attention to detail
Ability to work well independently with minimal supervision and to manage multiple priorities
Proven experience with Azure, AWS, GCP, OCI, and other modern technology platforms is required
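As an illustration of the proactive monitoring responsibility above, here is a minimal Python sketch that runs a couple of data-quality checks against a SQL Server / Synapse table and raises an alert on failure. The thresholds, table, connection DSN, and notifier are illustrative assumptions, not a specific client's setup.

```python
# Minimal data-issue monitor sketch: compare today's load against simple
# expectations and raise an alert. Table, DSN, and thresholds are assumptions.
import pyodbc  # assumes an ODBC driver for SQL Server / Synapse is installed

CHECKS = {
    "row_count": ("SELECT COUNT(*) FROM dbo.sales_fact",
                  lambda n: n > 0),
    "null_keys": ("SELECT COUNT(*) FROM dbo.sales_fact WHERE customer_id IS NULL",
                  lambda n: n == 0),
}

def run_checks(conn_str: str) -> list[str]:
    failures = []
    with pyodbc.connect(conn_str) as conn:
        cur = conn.cursor()
        for name, (sql, ok) in CHECKS.items():
            value = cur.execute(sql).fetchval()
            if not ok(value):
                failures.append(f"{name} failed (value={value})")
    return failures

if __name__ == "__main__":
    issues = run_checks("DSN=warehouse")  # hypothetical DSN
    if issues:
        print("ALERT:", "; ".join(issues))  # a real pipeline would page or email
```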

Posted 3 months ago

Apply

7.0 - 9.0 years

22 - 35 Lacs

New Delhi, Gurugram, Greater Noida

Work from Office

Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field
7–9 years of data engineering experience with strong hands-on delivery using ADF, SQL, Python, Databricks, and Spark
Experience designing data pipelines, warehouse models, and processing frameworks using Snowflake or Azure Synapse
Proficient with CI/CD tools (Azure DevOps, GitHub) and observability practices
Solid grasp of data governance, metadata tagging, and role-based access control
Proven ability to mentor and grow engineers in a matrixed or global environment
Strong verbal and written communication skills, with the ability to operate cross-functionally
Certifications in Azure, Databricks, or Snowflake are a plus

Preferred Skills:
Strong knowledge of data engineering concepts (data pipeline creation, data warehousing, data marts/cubes, data reconciliation and audit, data management)
Working knowledge of DevOps processes (CI/CD), Git/Jenkins version control tools, Master Data Management (MDM), and data quality tools
Strong experience in ETL/ELT development, QA, and operations/support processes (RCA of production issues, code/data fix strategy, monitoring and maintenance)
Hands-on experience with databases and tools such as Azure SQL DB, Snowflake, MySQL, Cosmos DB, Blob Storage, and Python/Unix shell scripting
ADF, Databricks, and Azure certifications are a plus

Technologies We Use: Databricks, Azure SQL DW/Synapse, Snowflake, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, scripting (PowerShell, Bash), Git, Terraform, Power BI

Responsibilities:
Design, develop, and maintain scalable pipelines across ADF, Databricks, Snowflake, and related platforms
Lead the technical execution of non-domain-specific initiatives (e.g., reusable dimensions, TLOG standardization, enablement pipelines)
Architect data models and reusable layers consumed by multiple downstream pods
Guide platform-wide patterns such as parameterization, CI/CD pipelines, pipeline recovery, and auditability frameworks (see the sketch after this list)
Mentor and coach team members
Partner with product and platform leaders to ensure engineering consistency and delivery excellence
Act as an L3 escalation point for operational data issues impacting foundational pipelines
Own engineering best practices, sprint planning, and quality across the Enablement pod
Contribute to platform discussions and architectural decisions across regions
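A minimal PySpark sketch of the parameterization and auditability patterns named above: the run is driven by a config dict (as ADF pipeline parameters would be) and every output row is stamped with audit columns. The config keys, paths, and table names are assumptions for illustration, not a specific framework.

```python
# Parameterized, auditable load sketch (PySpark). Config keys, paths, and
# audit column names are illustrative assumptions, not a specific framework.
import uuid
from datetime import datetime, timezone
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

config = {                      # in practice injected by ADF pipeline parameters
    "source_path": "/mnt/raw/tlog/",
    "target_table": "enablement.tlog_std",
    "load_date": "2025-01-31",
}

run_id = str(uuid.uuid4())      # ties every row back to one pipeline run

df = (
    spark.read.parquet(config["source_path"])
    .where(F.col("business_date") == config["load_date"])
    .withColumn("_run_id", F.lit(run_id))                     # audit: run id
    .withColumn("_loaded_at",
                F.lit(datetime.now(timezone.utc).isoformat()))  # audit: load time
)

df.write.mode("append").saveAsTable(config["target_table"])
print(f"run {run_id}: loaded rows into {config['target_table']}")
```

Stamping each row with the run id is what makes recovery practical: a failed run's rows can be identified and deleted before replay.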

Posted 3 months ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Mumbai

Work from Office

#JobOpening Data Engineer (Contract | 6 Months)
Location: Hyderabad | Chennai | Remote Flexibility Possible
Type: Contract | Duration: 6 Months

We are seeking an experienced Data Engineer to join our team for a 6-month contract assignment. The ideal candidate will work on data warehouse development, ETL pipelines, and analytics enablement using Snowflake, Azure Data Factory (ADF), dbt, and other tools. This role requires strong hands-on experience with data integration platforms, documentation, and pipeline optimization, especially in cloud environments such as Azure and AWS.

#KeyResponsibilities
Build and maintain ETL pipelines using Fivetran, dbt, and Azure Data Factory
Monitor and support production ETL jobs
Develop and maintain data lineage documentation for all systems (see the sketch below)
Design data mapping and documentation to aid QA/UAT testing
Evaluate and recommend modern data integration tools
Optimize shared data workflows and batch schedules
Collaborate with Data Quality Analysts to ensure accuracy and integrity of data flows
Participate in performance tuning and improvement recommendations
Support BI/MDM initiatives including Data Vaults and Data Lakes

#RequiredSkills
7+ years of experience in data engineering roles
Strong command of SQL, with 5+ years of hands-on development
Deep experience with Snowflake, Azure Data Factory, and dbt
Strong background with ETL tools (Informatica, Talend, ADF, dbt, etc.)
Bachelor's degree in CS, Engineering, Math, or a related field
Experience in the healthcare domain (working with PHI/PII data)
Familiarity with scripting/programming (Python, Perl, Java, Linux-based environments)
Excellent communication and documentation skills
Experience with BI tools such as Power BI, Cognos, etc.
Organized self-starter with strong time-management and critical-thinking abilities

#NiceToHave
Experience with Data Lakes and Data Vaults
QA & UAT alignment with clear development documentation
Multi-cloud experience (especially Azure and AWS)

#ContractDetails
Role: Data Engineer
Contract Duration: 6 Months
Location Options: Hyderabad / Chennai (remote flexibility available)
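As an illustration of the lineage-documentation responsibility, here is a small Python sketch that records table-to-table lineage edges and answers "what feeds this table?" by walking the graph. The edge list and table names are invented for the example.

```python
# Toy lineage register: record which upstream tables feed each target, then
# walk the graph to document a table's full upstream chain. Edges are invented.
from collections import defaultdict

lineage: dict[str, set[str]] = defaultdict(set)

def record(target: str, *sources: str) -> None:
    lineage[target].update(sources)

record("mart.revenue", "stg.orders", "stg.customers")
record("stg.orders", "raw.fivetran_orders")
record("stg.customers", "raw.fivetran_customers")

def upstream(table: str) -> set[str]:
    """All direct and transitive sources of a table."""
    seen: set[str] = set()
    stack = [table]
    while stack:
        for src in lineage.get(stack.pop(), ()):
            if src not in seen:
                seen.add(src)
                stack.append(src)
    return seen

print(sorted(upstream("mart.revenue")))
# ['raw.fivetran_customers', 'raw.fivetran_orders', 'stg.customers', 'stg.orders']
```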

Posted 3 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies