
478 Data Lake Jobs - Page 18

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3 - 5 years

6 - 10 Lacs

Bengaluru

Work from Office

Hello Talented Techie! We provide support in Project Services and Transformation, Digital Solutions and Delivery Management. We offer joint operations and digitalization services for Global Business Services and work closely alongside the entire Shared Services organization. We make efficient use of the possibilities of new technologies such as Business Process Management (BPM) and Robotics as enablers for efficient and effective implementations.

We are looking for a Data Engineer: a skilled Data Architect/Engineer with strong expertise in AWS and data lake solutions. If you're passionate about building scalable data platforms, this role is for you.

Your responsibilities will include:
- Architect & Design: Build scalable and efficient data solutions using AWS services like Glue, Redshift, S3, Kinesis (Apache Kafka), DynamoDB, Lambda, Glue Streaming ETL, and EMR.
- Real-Time Data Integration: Integrate real-time data from multiple Siemens orgs into our central data lake.
- Data Lake Management: Design and manage large-scale data lakes using S3, Glue, and Lake Formation.
- Data Transformation: Apply transformations to ensure high-quality, analysis-ready data.
- Snowflake Integration: Build and manage pipelines for Snowflake, using Iceberg tables for best performance and flexibility.
- Performance Tuning: Optimize pipelines for speed, scalability, and cost-effectiveness.
- Security & Compliance: Ensure all data solutions meet security standards and compliance guidelines.
- Team Collaboration: Work closely with data engineers, scientists, and app developers to deliver full-stack data solutions.
- Monitoring & Troubleshooting: Set up monitoring tools and quickly resolve pipeline issues when needed.

You'd describe yourself as:
- Experience: 3+ years of experience in data engineering or cloud solutioning, with a focus on AWS services.
- Technical Skills: Proficiency in AWS services such as AWS API, AWS Glue, Amazon Redshift, S3, Apache Kafka, and Lake Formation. Experience with real-time data processing and streaming architectures.
- Big Data Querying Tools: Solid understanding of big data querying tools (e.g., Hive, PySpark).
- Programming: Strong programming skills in languages such as Python, Java, or Scala for building and maintaining scalable systems.
- Problem-Solving: Excellent problem-solving skills and the ability to troubleshoot complex issues.
- Communication: Strong communication skills, with the ability to work effectively with both technical and non-technical stakeholders.
- Certifications: AWS certifications are a plus.

Create a better #TomorrowWithUs! This role, based in Bangalore, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide. We value your unique identity and perspective and are fully committed to providing equitable opportunities and building a workplace that reflects the diversity of society. Come bring your authentic self and create a better tomorrow with us. Find out more about Siemens careers at: www.siemens.com/careers
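For readers unfamiliar with the stack this posting names, here is a minimal, hypothetical PySpark sketch of the kind of S3-to-S3 curation job described (AWS Glue runs jobs like this as managed Spark). Bucket names, paths, and columns are invented for illustration, not taken from the posting.

```python
# Minimal PySpark sketch of an S3 batch curation step.
# All paths, bucket names, and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-to-lake").getOrCreate()

# Read raw JSON events landed in the raw zone (hypothetical path)
raw = spark.read.json("s3://raw-zone/orders/2024/")

# Basic cleansing: drop rows missing keys, normalize types
clean = (
    raw.dropna(subset=["order_id", "order_ts"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
)

# Write analysis-ready Parquet, partitioned for query pruning
(clean.write
      .mode("append")
      .partitionBy("order_date")
      .parquet("s3://curated-zone/orders/"))
```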

Posted 2 months ago

Apply

4 - 9 years

14 - 18 Lacs

Noida

Work from Office

Who We Are
Build a brighter future while learning and growing with a Siemens company at the intersection of technology, community, and sustainability. Our global team of innovators is always looking to create meaningful solutions to some of the toughest challenges facing our world. Find out how far your passion can take you.

What you need
* BS in an Engineering or Science discipline, or equivalent experience
* 7+ years of software/data engineering experience using Java, Scala, and/or Python, with at least 5 years' experience in a data-focused role
* Experience in data integration (ETL/ELT) development using multiple languages (e.g., Java, Scala, Python, PySpark, SparkSQL)
* Experience building and maintaining data pipelines supporting a variety of integration patterns (batch, replication/CDC, event streaming) and data lake/warehouse in production environments
* Experience with AWS-based data services technologies (e.g., Kinesis, Glue, RDS, Athena, etc.) and Snowflake CDW
* Experience working on larger initiatives building and rationalizing large-scale data environments with a large variety of data pipelines, possibly with internal and external partner integrations, would be a plus
* Willingness to experiment and learn new approaches and technology applications
* Knowledge and experience with various relational databases and demonstrable proficiency in SQL and supporting analytics uses and users
* Knowledge of software engineering and agile development best practices
* Excellent written and verbal communication skills

The Brightly culture
We're guided by a vision of community that serves the ambitions and wellbeing of all people, and our professional communities are no exception. We model that ideal every day by being supportive, collaborative partners to one another, conscientiously making space for our colleagues to grow and thrive. Our passionate team is driven to create a future where smarter infrastructure protects the environments that shape and connect us all. That brighter future starts with us.

Posted 2 months ago

Apply

3 - 5 years

11 - 15 Lacs

Hyderabad

Work from Office

Overview
As Senior Analyst, Data Modeling, your focus will be to partner with D&A Data Foundation team members to create data models for global projects. This includes independently analyzing project data needs, identifying data storage and integration needs/issues, and driving opportunities for data model reuse while satisfying project requirements. The role will advocate Enterprise Architecture, Data Design, and D&A standards and best practices. You will perform all aspects of data modeling, working closely with the Data Governance, Data Engineering, and Data Architecture teams. As a member of the data modeling team, you will create data models for very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. The primary responsibilities of this role are to work with data product owners, data management owners, and data engineering teams to create physical and logical data models with an extensible philosophy to support future, unknown use cases with minimal rework. You'll work in a hybrid environment with in-house, on-premise data sources as well as cloud and remote systems. You will establish data design patterns that will drive flexible, scalable, and efficient data models to maximize value and reuse.

Responsibilities
- Complete conceptual, logical, and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse, or other cloud data warehousing technologies.
- Govern data design/modeling documentation of metadata (business definitions of entities and attributes) and construction of database objects, for baseline and investment-funded projects, as assigned.
- Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting.
- Support assigned project contractors (both on- and off-shore), orienting new contractors to standards, best practices, and tools.
- Contribute to project cost estimates, working with senior members of the team to evaluate the size and complexity of changes or new development.
- Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework.
- Develop a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse.
- Partner with IT, data engineering, and other teams to ensure the enterprise data model incorporates key dimensions needed for proper management: business and financial policies, security, local-market regulatory rules, and consumer privacy by design principles (PII management), all linked across fundamental identity foundations.
- Drive collaborative reviews of design, code, data, and security feature implementation performed by data engineers to drive data product development.
- Assist with data planning, sourcing, collection, profiling, and transformation.
- Create source-to-target mappings for ETL and BI developers.
- Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical and data lakes; data str/cleansing.
- Partner with the Data Governance team to standardize their classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders.
- Support data lineage and mapping of source system data to canonical data stores for research, analysis, and productization.

Qualifications
- 8+ years of overall technology experience, including at least 4+ years of data modeling and systems architecture.
- 3+ years of experience with data lake infrastructure, data warehousing, and data analytics tools.
- 4+ years of experience developing enterprise data models.
- Experience building solutions in the retail or supply chain space.
- Expertise in data modeling tools (ER/Studio, Erwin, IDM/ARDM models).
- Experience with integration of multi-cloud services (Azure) with on-premises technologies.
- Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations.
- Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
- Experience with at least one MPP database technology such as Redshift, Synapse, Teradata, or Snowflake.
- Experience with version control systems like GitHub and deployment & CI tools.
- Experience with Azure Data Factory, Databricks, and Azure Machine Learning is a plus.
- Experience with metadata management, data lineage, and data glossaries is a plus.
- Working knowledge of agile development, including DevOps and DataOps concepts.
- Familiarity with business intelligence tools (such as Power BI).

Posted 2 months ago

Apply

4 - 8 years

10 - 18 Lacs

Kochi, Chennai, Bengaluru

Hybrid

Data Warehouse Developer
Experience: 3-8 years
Location: Chennai/Kochi/Bangalore

Responsibilities:
- Design, build, and maintain scalable and robust data engineering pipelines using Microsoft Azure technologies such as SQL Azure, Azure Data Factory, and Azure Databricks.
- Develop and optimize data solutions using Azure SQL, PySpark, and PySQL to handle complex data transformation and processing tasks.
- Implement and manage data storage solutions in OneLake and Azure SQL, ensuring data integrity and accessibility.
- Work closely with stakeholders to design and build effective reporting and analytics solutions using Power BI and other analytical tools.
- Collaborate with IT and security teams to integrate solutions within Azure AD and ensure compliance with data security and privacy standards.
- Contribute to the architectural design of database and lakehouse structures, optimizing for performance and scalability.
- Utilize .NET frameworks, where applicable, to enhance data processing and integration capabilities.
- Design and implement OLAP and data warehousing solutions, adhering to best practices in data warehouse design.
- Perform database and query performance tuning and optimization to ensure high performance and reliability.
- Stay updated on the latest technologies and trends in big data, proposing and implementing new tools and technologies to improve data systems and processes.
- Implement unit testing and automation strategies to ensure the reliability and performance of the full-stack application.
- Conduct thorough code reviews, providing constructive feedback to team members and ensuring adherence to coding standards and best practices.
- Collaborate with QA engineers to implement and maintain automated testing procedures, including API testing.
- Work in an Agile environment, participating in sprint planning, daily stand-ups, and retrospective meetings to ensure timely and iterative project delivery.
- Stay abreast of industry trends and emerging technologies to continuously improve skills and contribute innovative ideas.

Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3-8 years of professional experience in data engineering or a related field.
- Profound expertise in SQL, T-SQL, database design, and data warehousing principles.
- Strong experience with Microsoft Azure tools including MS Fabric, SQL Azure, Azure Data Factory, Azure Databricks, and Azure Data Lake.
- Proficiency in Python, PySpark, and PySQL for data processing and analytics tasks.
- Experience with Power BI and other reporting and analytics tools.
- Demonstrated knowledge of OLAP, data warehouse design concepts, and performance optimization in database and query processing.
- Knowledge of .NET frameworks is highly preferred.
- Excellent problem-solving, analytical, and communication skills.

Interested candidates can share their resumes at megha.chattopadhyay@aspiresys.com
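As a rough illustration of the Databricks/ADLS transformation work described above, here is a minimal PySpark sketch that lands cleansed data as a Delta table. It assumes a Databricks notebook (where `spark` is predefined) with Delta Lake available; storage account, container paths, and columns are hypothetical.

```python
# Hypothetical Databricks notebook cell: cleanse raw CSVs from ADLS and
# write them to a curated Delta location. Paths and columns are assumed.
from pyspark.sql import functions as F

# `spark` is provided by the Databricks runtime
src = spark.read.option("header", True).csv(
    "abfss://raw@mylake.dfs.core.windows.net/sales/"
)

curated = (
    src.withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .filter(F.col("amount") > 0)            # drop invalid rows
)

(curated.write
        .format("delta")
        .mode("overwrite")
        .save("abfss://curated@mylake.dfs.core.windows.net/sales/"))
```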

Posted 2 months ago

Apply

2 - 7 years

9 - 13 Lacs

Kochi

Work from Office

We are looking for a highly skilled and experienced Azure Data Engineer with 2 to 7 years of experience to join our team. The ideal candidate should have expertise in Azure Synapse Analytics, PySpark, Azure Data Factory, ADLS Gen2, SQL DW, T-SQL, and other relevant technologies.

### Roles and Responsibilities
- Design, develop, and implement data pipelines using Azure Data Factory or Azure Synapse Analytics.
- Develop and maintain data warehouses or data lakes using various tools and technologies.
- Work with various types of data sources including flat files, JSON, and databases.
- Build workflows and pipelines in Azure Synapse Analytics.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Ensure data quality and integrity by implementing data validation and testing procedures.

### Job Requirements
- Hands-on experience in Azure Data Factory or Azure Synapse Analytics.
- Experience in data warehouse or data lake development.
- Strong knowledge of Spark, Python, and DWH concepts.
- Ability to build workflows and pipelines in Azure Synapse Analytics.
- Fair knowledge of Microsoft Fabric & OneLake, SSIS, ADO, and other relevant technologies.
- Strong analytical, interpersonal, and collaboration skills.

Must have: Azure Synapse Analytics with PySpark, Azure Data Factory, ADLS Gen2, SQL DW, T-SQL.
Good to have: Azure Databricks, Microsoft Fabric & OneLake, SSIS, ADO.
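The posting's point about "data validation and testing procedures" can be made concrete with a small quality gate run before loading data onward. This is a hedged sketch assuming a plain PySpark environment; the path, key column, and thresholds are invented for illustration.

```python
# Simple data-quality gate: fail fast if the batch looks wrong.
# The landing path, key column, and 1% duplicate threshold are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-check").getOrCreate()
df = spark.read.json("/landing/customers/")  # hypothetical path

total = df.count()
null_keys = df.filter(F.col("customer_id").isNull()).count()
dupes = total - df.dropDuplicates(["customer_id"]).count()

if total == 0 or null_keys > 0 or dupes / max(total, 1) > 0.01:
    raise ValueError(
        f"Validation failed: rows={total}, null_keys={null_keys}, dupes={dupes}"
    )
```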

Posted 2 months ago

Apply

5 - 10 years

13 - 17 Lacs

Kochi

Work from Office

We are looking for a highly skilled and experienced Data Engineering Lead to join our team. The ideal candidate will have 5-10 years of experience in designing and implementing scalable data lake architecture and data pipelines.

### Roles and Responsibilities
- Design and implement scalable data lake architectures using Azure Data Lake services.
- Develop and maintain data pipelines to ingest data from various sources.
- Optimize data storage and retrieval processes for efficiency and performance.
- Ensure data security and compliance with industry standards.
- Collaborate with data scientists and analysts to facilitate data accessibility.
- Monitor and troubleshoot data pipeline issues to ensure reliability.
- Document data lake designs, processes, and best practices.

Must-have skills: Azure Data Lake, Azure Synapse Analytics, Azure Data Factory, Azure Databricks, Python (PySpark, NumPy, etc.), SQL, ETL, data warehousing, Azure DevOps, and experience developing streaming pipelines using Azure Event Hubs, Azure Stream Analytics, and Spark streaming, with integration into business intelligence tools such as Power BI (see the streaming sketch below).
Good-to-have skills: Big data technologies (e.g., Hadoop, Spark), data security.
General skills: Experience with Agile and DevOps methodologies and the software development lifecycle; proactive and responsible for deliverables; escalates dependencies and risks; works with most DevOps tools with limited supervision; completes assigned tasks on time and provides regular status reports; trains new team members; builds strong relationships with project stakeholders.

### Job Requirements
- Minimum 5 years of experience in designing and implementing scalable data lake architecture and data pipelines.
- Strong knowledge of Azure Data Lake, Azure Synapse Analytics, Azure Data Factory, Azure Databricks, Python (PySpark, NumPy, etc.), SQL, ETL, data warehousing, and Azure DevOps.
- Experience developing streaming pipelines using Azure Event Hubs, Azure Stream Analytics, and Spark streaming.
- Experience with SQL and NoSQL databases, and familiarity with big data file formats like Parquet and Avro.
- Ability to work with multi-cultural, global teams, including virtually.
- Knowledge of cloud solutions such as Azure or AWS; DevOps/Cloud certifications are desired.
- Proactive and responsible for deliverables; escalates dependencies and risks.
- Works with most DevOps tools with limited supervision; completes assigned tasks on time and provides regular status reports.
- Trains new team members and builds strong relationships with project stakeholders.
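For the streaming requirement above, one common pattern is consuming Azure Event Hubs through its Kafka-compatible endpoint with Spark Structured Streaming. This is a sketch under that assumption; the namespace, hub name, connection string, and output paths are placeholders.

```python
# Structured Streaming from Event Hubs via its Kafka endpoint.
# <namespace>, the hub name "telemetry", and paths are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("eventhub-stream").getOrCreate()

conn = "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=..."  # elided
jaas = (
    "org.apache.kafka.common.security.plain.PlainLoginModule required "
    f'username="$ConnectionString" password="{conn}";'
)

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "<namespace>.servicebus.windows.net:9093")
    .option("subscribe", "telemetry")  # event hub name acts as the Kafka topic
    .option("kafka.security.protocol", "SASL_SSL")
    .option("kafka.sasl.mechanism", "PLAIN")
    .option("kafka.sasl.jaas.config", jaas)
    .load()
)

query = (
    stream.selectExpr("CAST(value AS STRING) AS body")
    .writeStream.format("parquet")
    .option("path", "/curated/telemetry")
    .option("checkpointLocation", "/chk/telemetry")
    .start()
)
```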

Posted 2 months ago

Apply

8 - 10 years

13 - 17 Lacs

Kochi

Work from Office

We are looking for a skilled Data Engineering Lead with 8 to 10 years of experience, based in Bengaluru. The ideal candidate will have a strong background in designing and implementing scalable data lake architecture and data pipelines.

### Roles and Responsibilities
- Design and implement scalable data lake architectures using Azure Data Lake services.
- Develop and maintain data pipelines to ingest data from various sources.
- Optimize data storage and retrieval processes for efficiency and performance.
- Ensure data security and compliance with industry standards.
- Collaborate with data scientists and analysts to facilitate data accessibility.
- Monitor and troubleshoot data pipeline issues to ensure reliability.
- Document data lake designs, processes, and best practices.
- Develop streaming pipelines using Azure Event Hubs, Azure Stream Analytics, and Spark streaming.
- Integrate with business intelligence tools such as Power BI.

### Job Requirements
- Strong knowledge of Azure Data Lake, Azure Synapse Analytics, Azure Data Factory, and Azure Databricks.
- Proficiency in Python (PySpark, NumPy), SQL, ETL, and data warehousing.
- Experience with SQL and NoSQL databases, and familiarity with big data file formats like Parquet and Avro.
- Experience with Agile and DevOps methodologies and the software development lifecycle.
- Proactive and responsible for deliverables; escalates dependencies and risks.
- Works with most DevOps tools with limited supervision; completes assigned tasks on time with regular status reporting.
- Ability to train new team members and build strong relationships with project stakeholders.
- Knowledge of cloud solutions such as Azure or AWS; DevOps/Cloud certifications are desired.
- Ability to work with multi-cultural, global teams virtually.

Posted 2 months ago

Apply

2 - 5 years

10 - 14 Lacs

Kochi

Work from Office

We are looking for a skilled ETL Developer with 2 to 5 years of experience to join our team in Bengaluru. The ideal candidate will have hands-on experience in developing data integration routines using Azure Data Factory, Azure Databricks, Scala/PySpark notebooks, Azure PaaS SQL, and Azure BLOB Storage.

### Roles and Responsibilities
- Convert business and technical requirements into appropriate technical solutions and implement features using Azure Data Factory, Databricks, and Azure Data Lake Store.
- Implement data integration features using Azure Data Factory, Azure Databricks, and Scala/PySpark notebooks.
- Set up and maintain Azure PaaS SQL databases and database objects, including Azure BLOB Storage.
- Create complex queries, including dynamic queries, for data ingestion (see the sketch below).
- Own project tasks and ensure timely completion.
- Maintain effective communication within the team, with peers, leadership teams, and other IT groups.

### Job Requirements
- Bachelor's degree in Computer Science or equivalent.
- Minimum 2-5 years of experience as a software developer.
- Hands-on experience developing data integration routines using Azure Data Factory, Azure Databricks, Scala/PySpark notebooks, Azure PaaS SQL, and Azure BLOB Storage.
- Experience/knowledge in Azure Data Lake and related services.
- Ability to take accountability for quality technical deliverables to agreed schedules and estimates.
- Strong verbal and written communication skills.
- Must be an outstanding team player.
- Ability to manage and prioritize workload.
- Quick learner with a 'can-do' attitude.
- Flexible and able to quickly adapt to change.
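To make the "dynamic queries for data ingestion" idea concrete, here is a hedged sketch against Azure PaaS SQL using pyodbc, with an identifier whitelist so the dynamic part stays safe while values remain parameterized. Server, database, table, and column names are hypothetical.

```python
# Illustrative dynamic ingestion query against Azure SQL with pyodbc.
# Identifiers come from a whitelist; values are parameterized.
import pyodbc

ALLOWED_TABLES = {"stg_orders", "stg_customers"}  # hypothetical staging tables

def load_since(table: str, since: str) -> list:
    if table not in ALLOWED_TABLES:
        raise ValueError(f"unknown table: {table}")
    conn = pyodbc.connect(
        "Driver={ODBC Driver 18 for SQL Server};"
        "Server=myserver.database.windows.net;Database=mydb;"  # placeholders
        "UID=etl_user;PWD=<secret>;"
    )
    # Table name is whitelisted above; the filter value is a bound parameter
    sql = f"SELECT * FROM {table} WHERE modified_at >= ?"
    rows = conn.cursor().execute(sql, since).fetchall()
    conn.close()
    return rows
```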

Posted 2 months ago

Apply

10 - 15 years

20 - 25 Lacs

Kolkata

Work from Office

We are looking for a skilled Solution Architect with 10 to 15 years of experience to join our team in Bengaluru. The role involves designing and implementing scalable, reliable, and high-performing data architecture solutions.

### Roles and Responsibilities
- Design and develop data architecture solutions that meet business requirements.
- Collaborate with stakeholders to identify needs and translate them into technical data solutions.
- Provide technical leadership and support to software development teams.
- Define and implement data management policies, procedures, and standards.
- Ensure data quality and integrity through data cleansing and validation.
- Develop and implement data security and privacy policies, ensuring compliance with regulations like GDPR and HIPAA.
- Design and implement data migration plans from legacy systems to the cloud.
- Build data pipelines and workflows using Azure services such as Azure Data Factory, Azure Databricks, and Azure Stream Analytics.
- Develop and maintain data models and database schemas aligned with business requirements.
- Evaluate and select appropriate data storage technologies, including Azure SQL Database, Azure Cosmos DB, and Azure Data Lake Storage.
- Troubleshoot data-related issues and provide technical support to data users.
- Stay updated on the latest trends and developments in data architecture and recommend improvements.
- Coordinate and interact with multiple teams for smooth operations.

### Job Requirements
- Proven experience as a Technical/Data Architect with over 10 years of product/solutions development experience.
- Hands-on experience with software/product architecture, design, development, testing, and implementation.
- Excellent communication skills, problem-solving aptitude, and organizational and leadership skills; ability to work effectively in a team environment.
- Experience with Agile development methodology and strategic development/deployment methodologies.
- Understanding of source control (Git/VSTS), continuous integration/continuous deployment, and information security.
- Hands-on experience with cloud-based (Azure) product/platform development and implementation.
- Good experience designing and working with data lakes, data warehouses, and Azure-based ETL tools.
- Expertise in Azure data analytics with a thorough understanding of Azure Data Platform tools.
- Hands-on experience and good understanding of Azure services like Data Factory, Databricks, Synapse, Data Lake Gen2, Stream Analytics, Azure Spark, Azure ML, SQL Server DB, and Cosmos DB.
- Hands-on experience in information management and business intelligence projects, handling huge client data sets with functions including transfer, ingestion, processing, analysis, and visualization.

Posted 2 months ago

Apply

3 - 7 years

9 - 14 Lacs

Kochi

Work from Office

We are looking for a highly skilled and experienced Senior Data Engineer to join our team, with 3-7 years of experience in modern data ecosystems. The ideal candidate will have hands-on proficiency in Informatica CDI, Azure Data Factory (ADF), Azure Data Lake (ADLS), and Databricks.

### Roles and Responsibilities
- Provide daily application management support for the full data stack, addressing service requests, incidents, enhancements, and changes.
- Lead and coordinate resolution of complex data integration and analytics issues through thorough root cause analysis.
- Collaborate with technical and business stakeholders to support and optimize data pipelines, models, and dashboards.
- Maintain detailed documentation, including architecture diagrams, troubleshooting guides, and test cases.
- Remain flexible for shift-based work or on-call duties depending on client needs and critical business periods.
- Ensure seamless operation of data platforms and timely resolution of incidents.

### Job Requirements
- Bachelor's degree in Computer Science, Engineering, Data Analytics, or a related field, or equivalent work experience.
- Strong understanding of data governance, performance tuning, and cloud-based data architecture best practices.
- Excellent stakeholder collaboration skills to translate business needs into scalable technical solutions.
- Solid understanding of data pipeline management and optimization techniques.
- Experience integrating data from various sources, including ERP, CRM, POS, and third-party APIs.
- Familiarity with DevOps/CI-CD pipelines in a data engineering context.
- Certifications such as Informatica Certified Developer, Microsoft Certified: Azure Data Engineer Associate, and Databricks Certified Data Engineer are preferred.

We seek passionate, proactive problem solvers with a strong client orientation: professionals eager to learn and grow in a fast-paced, global delivery environment. This is a chance to work alongside a world-class, multidisciplinary team delivering data excellence to global businesses.

Posted 2 months ago

Apply

5 - 7 years

9 - 14 Lacs

Noida

Work from Office

Reports to: Program Manager, Analytics & BI

Position summary: The Specialist will work with the development team and be responsible for development tasks as an individual contributor. He/she should be able to mentor the team and help resolve issues, and should be technically sound and able to communicate clearly with clients.

Key duties & responsibilities:
- Work as lead developer on a data engineering project for end-to-end (E2E) analytics.
- Ensure project delivery on time.
- Mentor other teammates and guide them.
- Gather requirements from the client and manage client communication.
- Ensure timely creation of documents for the knowledge base, user guides, and other communication systems.
- Ensure delivery against business needs, team goals, and objectives, i.e., meeting commitments and coordinating the overall schedule.
- Work with large datasets in various formats, performing integrity/QA checks and reconciliation for accounting systems.
- Lead efforts to troubleshoot and solve process- or system-related issues.
- Understand, support, enforce, and comply with company policies, procedures, and Standards of Business Ethics and Conduct.
- Experience working with Agile methodology.

Experience, skills, and knowledge:
- Bachelor's degree in Computer Science or equivalent experience is required; B.Tech/MCA preferable.
- Minimum 5-7 years' experience.
- Excellent communication skills and a strong commitment to delivering the highest level of service.

Technical skills:
- Expert knowledge and experience working with Spark and Scala.
- Experience in Azure Data Factory, Azure Databricks, and Data Lake.
- Experience working with SQL and Snowflake.
- Experience with data integration tools such as SSIS and ADF.
- Experience with programming languages such as Python.
- Expert in Astronomer Airflow (see the DAG sketch below).
- Experience or exposure to Microsoft Azure Data Fundamentals.

Key competency profile:
- Own your development by implementing and sharing your learnings.
- Motivate each other to perform at our highest level.
- Work the right way by acting with integrity and living our values every day.
- Succeed by proactively identifying problems and solutions for yourself and others.
- Communicate effectively if there are any challenges.
- Demonstrate accountability and responsibility.
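Since the posting calls out Astronomer Airflow, here is a minimal Airflow 2.x DAG sketch of the orchestration style implied. The DAG id, schedule, and task bodies are placeholders, not this employer's actual pipeline.

```python
# Minimal Airflow 2.x DAG: a daily extract -> load chain.
# Task bodies are stand-ins for real extraction and Snowflake loading.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source system")  # placeholder

def load():
    print("load into Snowflake")      # placeholder

with DAG(
    dag_id="e2e_analytics_pipeline",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task          # load runs after extract succeeds
```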

Posted 2 months ago

Apply

3 - 5 years

5 - 9 Lacs

Bengaluru

Work from Office

The Core AI, BI & Data Platforms team has been established to create, operate, and run the enterprise AI, BI, and data capabilities that reduce time to market for reporting, analytics, and data science teams to run experiments, train models, and generate insights, as well as to evolve and run the CoCounsel application and its shared capability, the CoCounsel AI Assistant. The Enterprise Data Platform aims to provide self-service capabilities for fast and secure ingestion and consumption of data across TR.

At Thomson Reuters, we are recruiting a team of motivated cloud professionals to transform how we build, manage, and leverage our data assets. The Data Platform team in Bangalore is seeking an experienced Software Engineer with a passion for engineering cloud-based data platform systems. Join our dynamic team as a Software Engineer and take a pivotal role in shaping the future of our Enterprise Data Platform. You will develop and implement data processing applications and frameworks on cloud-based infrastructure, ensuring the efficiency, scalability, and reliability of our systems.

About the Role
In this opportunity as a Software Engineer, you will:
- Develop data processing applications and frameworks on cloud-based infrastructure in partnership with Data Analysts and Architects, with guidance from the Lead Software Engineer.
- Innovate with new approaches to meet data management requirements.
- Make recommendations about platform adoption, including technology integrations, application servers, libraries, AWS frameworks, documentation, and usability by stakeholders.
- Contribute to improving the customer experience.
- Participate in code reviews to maintain a high-quality codebase.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Work closely with product owners, designers, and other developers to understand requirements and deliver solutions.
- Effectively communicate and liaise across the data platform and management teams.
- Stay updated on emerging trends and technologies in cloud computing.

About You
You're a fit for the role of Software Engineer if you meet all or most of these criteria:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 3+ years of relevant experience in implementation of data lakes and data management technologies for large-scale organizations.
- Experience building and maintaining data pipelines with excellent run-time characteristics such as low latency, fault tolerance, and high availability.
- Proficiency in the Python programming language.
- Experience with AWS services and management, including serverless, container, queueing, and monitoring services such as Lambda, ECS, API Gateway, RDS, DynamoDB, Glue, S3, IAM, Step Functions, CloudWatch, SQS, and SNS (a minimal Lambda sketch appears at the end of this listing).
- Good knowledge of consuming and building APIs.
- Experience with business intelligence tools like Power BI.
- Fluency in querying languages such as SQL.
- Solid understanding of software development practices such as version control via Git, CI/CD, and release management.
- Agile development cadence.
- Good critical thinking, communication, documentation, troubleshooting, and collaborative skills.

What's in it For You?
Join us to inform the way forward with the latest AI solutions and address real-world challenges in legal, tax, compliance, and news. Backed by our commitment to continuous learning and market-leading benefits, you'll be prepared to grow, lead, and thrive in an AI-enabled future.
This includes:
- Industry-Leading Benefits: We offer comprehensive benefit plans, including flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
- Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, and a hybrid model, empowering employees to achieve a better work-life balance.
- Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
- Culture: Globally recognized and award-winning reputation for inclusion, innovation, and customer focus. Our eleven business resource groups nurture our culture of belonging across the diverse backgrounds and experiences represented across our global footprint.
- Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office, depending on the role) for our office-based roles, while delivering a seamless experience that is digitally and physically connected.
- Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
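To ground the AWS serverless stack this role names (Lambda, SQS, S3), here is a minimal, hypothetical Lambda handler that drains SQS messages into S3. The bucket name and key layout are assumptions; the SQS trigger itself is wired up in infrastructure and not shown.

```python
# Hypothetical Lambda: consume SQS records and land them in S3 as JSON.
# BUCKET and the key layout are placeholders, not a real configuration.
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "data-platform-landing"  # hypothetical bucket

def handler(event, context):
    records = event.get("Records", [])
    for record in records:
        body = json.loads(record["body"])            # SQS message payload
        key = f"events/{record['messageId']}.json"   # one object per message
        s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(body))
    return {"written": len(records)}
```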

Posted 2 months ago

Apply

10 - 15 years

37 - 45 Lacs

Bengaluru

Work from Office

The Thomson Reuters Financial Transformation team is instrumental in implementing and delivering solutions relating to digital finance strategies, enterprise performance strategies, and technology solutions. This position will play a key role in Performance Management projects, including tech-driven transformation with tools like OneStream.

About the Role
In this opportunity as EPM Architect (OneStream), you will bring:
- 10-15 years of working experience with Enterprise Performance Management (EPM) solutions implementation and delivery.
- Hands-on experience in EPM tools: OneStream, Hyperion.
- Involvement in end-to-end implementation of the OneStream platform, with significant exposure to managing OneStream infrastructure.
- Design and architecture of optimal and scalable solutions.
- Responsibility for managing OneStream infrastructure (environment management, application performance).
- Work with internal teams to ensure OneStream compliance with TR security standards (VPN connection, encryption standards, security dashboards, etc.).
- Application governance across OneStream environments, such as code management and artifact management.
- Driving automation initiatives related to the areas above.
- Experience with data integration methodologies for connecting the OneStream platform with other systems like data lakes, SQL Server, S/4HANA, Power BI, etc.
- Exceptional analytical skills and a passion for the insights that result from those analyses, together with a strong understanding of the data and collection processes needed to fuel that analysis.
- A passion for serving others; works well in a team, is self-motivated, and is a problem-solver.
- Hands-on experience with planning, forecasting, and month-end processes.
- Good to have: Gen AI and Sensible ML knowledge; Power BI and other reporting experience.

About You
You're a fit for the role of EPM Architect (OneStream) if your background includes:
- Leading financial planning and performance management projects, including tech-driven transformation with tools like OneStream and Oracle EPM.
- Leading solution design and development teams.
- Leading ongoing management and optimization of the OneStream platform's infrastructure as business requirements evolve.
- Working with the core OneStream project team during implementation of various processes on the platform.
- Providing technical knowledge and expertise in the areas of security, system integration, and application performance management.
- Leading admin activities for OneStream upgrades, patches, and hotfixes.

Posted 2 months ago

Apply

10 - 18 years

12 - 22 Lacs

Pune, Bengaluru

Hybrid

Hi, we are hiring for the role of AWS Data Engineer with one of the leading organizations, for Bangalore & Pune.

Experience: 10+ years
Location: Bangalore & Pune
CTC: Best in the industry

Job Description / Technical Skills:
- PySpark coding skills
- Proficiency in AWS data engineering services
- Experience in designing data pipelines & data lakes

If interested, kindly share your resume at nupur.tyagi@mounttalent.com

Posted 2 months ago

Apply

8 - 13 years

25 - 30 Lacs

Mumbai, Navi Mumbai, Mumbai (All Areas)

Work from Office

Strong programming skills in Python or R, with good knowledge of data manipulation, analysis, and visualization libraries (pandas, NumPy, matplotlib, seaborn). Knowledge of machine learning techniques and algorithms. FMCG industry experience is preferred.

Required candidate profile: Hands-on knowledge of machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch). Proficiency in SQL for data extraction, integration, and manipulation to analyze large datasets.

Perks and benefits: To be disclosed post-interview.
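As a small illustration of the stack this profile names (pandas plus scikit-learn), here is a hedged supervised-learning sketch. The dataset, feature columns, and target are invented for illustration.

```python
# Minimal pandas + scikit-learn classification sketch.
# "sales.csv" and all column names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

df = pd.read_csv("sales.csv")  # hypothetical dataset
X = df[["price", "promo_depth", "distribution"]]
y = df["uplift_flag"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# Scale features, then fit a simple baseline classifier
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```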

Posted 2 months ago

Apply

2 - 6 years

3 - 6 Lacs

Hyderabad

Work from Office

Role Description
The R&D Data Catalyst Team is responsible for building data searching, cohort building, and knowledge management tools that give the Amgen scientific community visibility into Amgen's wealth of human datasets, projects and study histories, and knowledge of various scientific findings. These solutions are pivotal tools in Amgen's goal to accelerate the speed of discovery and speed to market of advanced precision medications.

The Sr. Data Engineer will be responsible for the end-to-end development of an enterprise analytics and data mastering solution leveraging Databricks and Power BI. This role requires expertise in both data architecture and analytics, with the ability to create scalable, reliable, and high-performing enterprise solutions that support research cohort-building and an advanced research pipeline. The ideal candidate will have experience creating and surfacing large unified repositories of human data, based on integrations from multiple repositories and solutions, and be exceptionally skilled with data analysis and profiling. You will collaborate closely with stakeholders, product team members, and related IT teams to design and implement data models, integrate data from various sources, and ensure best practices for data governance and security. The ideal candidate will have a strong background in data warehousing, ETL, Databricks, Power BI, and enterprise data mastering.

Roles & Responsibilities
- Design and build scalable enterprise analytics solutions using Databricks, Power BI, and other modern data tools.
- Leverage data virtualization, ETL, and semantic layers to balance the need for unification, performance, and data transformation with the goal of reducing data proliferation.
- Break down features into work that aligns with the architectural direction runway.
- Participate hands-on in pilots and proofs-of-concept for new patterns.
- Create robust documentation from data analysis and profiling, and of proposed designs and data logic.
- Develop advanced SQL queries to profile and unify data.
- Develop data processing code in SQL, along with semantic views, to prepare data for reporting.
- Develop Power BI models and reporting packages.
- Design robust data models and processing layers that support both analytical processing and operational reporting needs.
- Design and develop solutions based on best practices for data governance, security, and compliance within Databricks and Power BI environments.
- Ensure the integration of data systems with other enterprise applications, creating seamless data flows across platforms.
- Develop and maintain Power BI solutions, ensuring data models and reports are optimized for performance and scalability.
- Collaborate with stakeholders to define data requirements, functional specifications, and project goals.
- Continuously evaluate and adopt new technologies and methodologies to enhance the architecture and performance of data solutions.

Basic Qualifications and Experience
- Master's degree with 4 to 6 years of experience in Product Owner / Platform Owner / Service Owner roles, OR
- Bachelor's degree with 8 to 10 years of experience in Product Owner / Platform Owner / Service Owner roles.

Functional Skills (Must-Have)
- Minimum of 3 years of hands-on experience with BI solutions (preferably Power BI or Business Objects), including report development, dashboard creation, and optimization.
- Minimum of 6 years of hands-on experience building change-data-capture (CDC) ETL pipelines, data warehouse design and build, and enterprise-level data management (see the CDC upsert sketch below).
- Hands-on experience with Databricks, including data engineering, optimization, and analytics workloads.
- Deep understanding of Power BI, including model design, DAX, and Power Query.
- Proven experience designing and implementing data mastering solutions and data governance frameworks.
- Expertise in cloud platforms (AWS), data lakes, and data warehouses.
- Strong knowledge of ETL processes, data pipelines, and integration technologies.
- Strong communication and collaboration skills to work with cross-functional teams and senior leadership.
- Ability to assess business needs and design solutions that align with organizational goals.
- Exceptional hands-on capabilities with data profiling, data transformation, and data mastering.
- Success in mentoring and training team members.

Good-to-Have Skills
- Experience in developing differentiated and deliverable solutions.
- Experience with human data, ideally human healthcare data.
- Familiarity with laboratory testing, patient data from clinical care, HL7, FHIR, and/or clinical trial data management.

Professional Certifications (preferred)
- ITIL Foundation or other relevant certifications.
- SAFe Agile Practitioner (6.0).
- Microsoft Certified: Data Analyst Associate (Power BI) or related certification.
- Databricks Certified Professional or similar certification.

Soft Skills
- Excellent analytical and troubleshooting skills.
- Deep intellectual curiosity.
- Highest degree of initiative and self-motivation.
- Strong verbal and written communication skills, including presentation of complex technical/business topics to varied audiences.
- Confident technical leader.
- Ability to work effectively with global, virtual teams, including leveraging tools and artifacts to ensure clear and efficient collaboration across time zones.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong problem-solving and analytical skills; ability to learn quickly and to retain and synthesize complex information from diverse sources.
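The CDC requirement above is commonly met on Databricks with a Delta Lake MERGE upsert. A minimal sketch under that assumption; the table paths and join key are hypothetical, and `spark` is the notebook-provided session.

```python
# CDC upsert pattern with Delta Lake MERGE (Databricks notebook context).
# Paths and the subject_id key are placeholders.
from delta.tables import DeltaTable

target = DeltaTable.forPath(spark, "/lake/silver/subjects")
updates = spark.read.format("delta").load("/lake/bronze/subjects_cdc")

(target.alias("t")
 .merge(updates.alias("s"), "t.subject_id = s.subject_id")
 .whenMatchedUpdateAll()      # apply changed rows
 .whenNotMatchedInsertAll()   # insert brand-new rows
 .execute())
```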

Posted 2 months ago

Apply

4 - 6 years

10 - 14 Lacs

Hyderabad

Work from Office

ABOUT THE ROLE

Role Description
The R&D Data Catalyst Team is responsible for building data searching, cohort building, and knowledge management tools that give the Amgen scientific community visibility into Amgen's wealth of human datasets, projects and study histories, and knowledge of various scientific findings. These solutions are pivotal tools in Amgen's goal to accelerate the speed of discovery and speed to market of advanced precision medications.

The Data Architect will be responsible for the end-to-end architecture of an enterprise analytics and data mastering solution leveraging Databricks and Power BI. This role requires expertise in both data architecture and analytics, with the ability to create scalable, reliable, and high-performing enterprise solutions that support research cohort-building and an advanced research pipeline. The ideal candidate will have proven experience creating and surfacing large unified repositories of human data, based on integrations from multiple sources and solutions. You will collaborate closely with stakeholders across departments, including data engineering, business intelligence, and IT teams, to design and implement data models, integrate data from various sources, and ensure best practices for data governance and security. The ideal candidate will have a strong background in data warehousing, ETL, Databricks, Power BI, and enterprise data mastering.

Roles & Responsibilities
- Architect scalable enterprise analytics solutions using Databricks, Power BI, and other modern data tools.
- Leverage data virtualization, ETL, and semantic layers to balance the need for unification, performance, and data transformation with the goal of reducing data proliferation.
- Support development planning by breaking down features into work that aligns with the architectural direction runway.
- Participate hands-on in pilots and proofs-of-concept for new patterns.
- Create robust documentation of architectural direction, patterns, and standards.
- Present and train engineers and cross-team collaborators on architecture strategy and patterns.
- Collaborate with data engineers to build and optimize ETL pipelines, ensuring efficient data ingestion and processing from multiple sources.
- Design robust data models and processing layers that support both analytical processing and operational reporting needs.
- Develop and implement best practices for data governance, security, and compliance within Databricks and Power BI environments.
- Ensure the integration of data systems with other enterprise applications, creating seamless data flows across platforms.
- Provide thought leadership and strategic guidance on data architecture, advanced analytics, and data mastering best practices.
- Develop and maintain Power BI solutions, ensuring data models and reports are optimized for performance and scalability.
- Serve as a subject matter expert on Power BI and Databricks, providing technical leadership and mentoring to other teams.
- Collaborate with stakeholders to define data requirements, architecture specifications, and project goals.
- Continuously evaluate and adopt new technologies and methodologies to enhance the architecture and performance of data solutions.

Basic Qualifications and Experience
- Master's degree with 4 to 6 years of experience in data management and data architecture, OR
- Bachelor's degree with 6 to 8 years of experience in data management and data architecture.

Functional Skills (Must-Have)
- Minimum of 3 years of hands-on experience with BI solutions (preferably Power BI or Business Objects), including report development, dashboard creation, and optimization.
- Minimum of 7 years of hands-on experience building change-data-capture (CDC) ETL pipelines, data warehouse design and build, and enterprise-level data management.
- Hands-on experience with Databricks, including data engineering, optimization, and analytics workloads.
- Deep understanding of Power BI, including model design, DAX, and Power Query.
- Proven experience designing and implementing data mastering solutions and data governance frameworks.
- Expertise in cloud platforms (AWS), data lakes, and data warehouses.
- Strong knowledge of ETL processes, data pipelines, and integration technologies.
- Strong communication and collaboration skills to work with cross-functional teams and senior leadership.
- Ability to assess business needs and design solutions that align with organizational goals.
- Exceptional hands-on capabilities with data profiling, data transformation, and data mastering.
- Success in mentoring and training team members.

Good-to-Have Skills
- Experience in developing differentiated and deliverable solutions.
- Experience with human data, ideally human healthcare data.
- Familiarity with laboratory testing, patient data from clinical care, HL7, FHIR, and/or clinical trial data management.

Professional Certifications (preferred)
- ITIL Foundation or other relevant certifications.
- SAFe Agile Practitioner (6.0).
- Microsoft Certified: Data Analyst Associate (Power BI) or related certification.
- Databricks Certified Professional or similar certification.

Soft Skills
- Excellent analytical and troubleshooting skills.
- Deep intellectual curiosity.
- Highest degree of initiative and self-motivation.
- Strong verbal and written communication skills, including presentation of complex technical/business topics to varied audiences.
- Confident technical leader.
- Ability to work effectively with global, virtual teams, including leveraging tools and artifacts to ensure clear and efficient collaboration across time zones.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong problem-solving and analytical skills; ability to learn quickly and to retain and synthesize complex information from diverse sources.

Posted 2 months ago

Apply

2 - 6 years

11 - 15 Lacs

Hyderabad

Work from Office

Amgen's Precision Medicine technology team is responsible for building data searching, cohort building, and knowledge management tools that give the Amgen scientific community visibility into Amgen's wealth of human datasets, projects and study histories, and knowledge of various scientific findings. These data include multiomics data (genomics, transcriptomics, proteomics, etc.), clinical study subject measurement and outcome data, images, and specimen inventory data. Our PMED data management, standardization, surfacing, and processing capabilities are pivotal tools in Amgen's goal to accelerate the speed of discovery and speed to market of advanced precision medications.

The Solution and Data Architect will be responsible for the end-to-end architecture of an enterprise analytics and data mastering solution leveraging Databricks and Power BI. This role requires expertise in both data architecture and analytics, with the ability to create scalable, reliable, and high-performing enterprise solutions that support research cohort-building and an advanced research pipeline. The ideal candidate will have experience creating and surfacing large unified repositories of human data, based on integrations from multiple repositories and solutions. You will collaborate closely with stakeholders across departments, including data engineering, business intelligence, and IT teams, to design and implement data models, integrate data from various sources, and ensure best practices for data governance and security. The ideal candidate will have a strong background in data warehousing, ETL, Databricks, Power BI, and enterprise data mastering.

Roles & Responsibilities
- Architect scalable enterprise analytics solutions using Databricks, Power BI, and other modern data tools.
- Leverage data virtualization, ETL, and semantic layers to balance the need for unification, performance, and data transformation with the goal of reducing data proliferation.
- Support development planning by breaking down features into work that aligns with the architectural direction runway.
- Participate hands-on in pilots and proofs-of-concept for new patterns.
- Create robust documentation of architectural direction, patterns, and standards.
- Present and train engineers and cross-team collaborators on architecture strategy and patterns.
- Collaborate with data engineers to build and optimize ETL pipelines, ensuring efficient data ingestion and processing from multiple sources.
- Design robust data models and processing layers that support both analytical processing and operational reporting needs.
- Develop and implement best practices for data governance, security, and compliance within Databricks and Power BI environments.
- Ensure the integration of data systems with other enterprise applications, creating seamless data flows across platforms.
- Provide thought leadership and strategic guidance on data architecture, advanced analytics, and data mastering best practices.
- Develop and maintain Power BI solutions, ensuring data models and reports are optimized for performance and scalability.
- Serve as a subject matter expert on Power BI and Databricks, providing technical leadership and mentoring to other teams.
- Collaborate with stakeholders to define data requirements, architecture specifications, and project goals.
- Continuously evaluate and adopt new technologies and methodologies to enhance the architecture and performance of data solutions.

Basic Qualifications and Experience
- Master's degree with 6 to 8 years of experience in data management and data solution architecture, OR
- Bachelor's degree with 8 to 10 years of experience in data management and data solution architecture, OR
- Diploma with 10 to 12 years of experience in data management and data solution architecture.

Functional Skills (Must-Have)
- Minimum of 3 years of hands-on experience with BI solutions (preferably Power BI or Business Objects), including report development, dashboard creation, and optimization.
- Minimum of 7 years of hands-on experience building change-data-capture (CDC) ETL pipelines, data warehouse design and build, and enterprise-level data management.
- Hands-on experience with Databricks, including data engineering, optimization, and analytics workloads.
- Deep understanding of Power BI, including model design, DAX, and Power Query.
- Proven experience designing and implementing data mastering solutions and data governance frameworks.
- Expertise in cloud platforms (AWS), data lakes, and data warehouses.
- Strong knowledge of ETL processes, data pipelines, and integration technologies.
- Strong communication and collaboration skills to work with cross-functional teams and senior leadership.
- Ability to assess business needs and design solutions that align with organizational goals.
- Exceptional hands-on capabilities with data profiling, data transformation, and data mastering.
- Success in mentoring and training team members.

Good-to-Have Skills
- Experience in developing differentiated and deliverable solutions.
- Experience with human data, ideally human healthcare data.
- Familiarity with laboratory testing, patient data from clinical care, HL7, FHIR, and/or clinical trial data management.

Professional Certifications (preferred)
- ITIL Foundation or other relevant certifications.
- SAFe Agile Practitioner (6.0).
- Microsoft Certified: Data Analyst Associate (Power BI) or related certification.
- Databricks Certified Professional or similar certification.

Soft Skills
- Excellent analytical and troubleshooting skills.
- Deep intellectual curiosity.
- Highest degree of initiative and self-motivation.
- Strong verbal and written communication skills, including presentation of complex technical/business topics to varied audiences.
- Confident technical leader.
- Ability to work effectively with global, virtual teams, including leveraging tools and artifacts to ensure clear and efficient collaboration across time zones.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong problem-solving and analytical skills; ability to learn quickly and to retain and synthesize complex information from diverse sources.

Posted 2 months ago

Apply

1 - 4 years

6 - 10 Lacs

Hyderabad

Work from Office

Associate - Next Gen Forecasting What you will do Let’s do this. Let’s change the world. In this vital role you will support an ambitious program to evolve how Amgen does forecasting, moving from batch processes (e.g., sales forecasting to COGS forecast, clinical study forecasting) to a more continuous process. The hardworking professional we seek is curious by nature, organizationally and data savvy, with a strong record of Finance transformation, partner management and accomplishments in Finance, Accounting, or Procurement. This role will help redesign existing processes to incorporate Artificial Intelligence and Machine Learning capabilities to significantly reduce time and resources needed to build forecasts. As the Next Gen Forecasting Associate at Amgen India, you will support innovation and continuous improvement in Finance’s planning, reporting and data processes with a focus on enhancing current technologies and adapting new technologies where relevant. This individual will collaborate with cross-functional teams and support business objectives. This role reports directly to the Next Gen Forecasting Manager in Hyderabad, India. Roles & Responsibilities: Priorities can often change in a fast-paced technology environment like Amgen’s, so this role includes, but is not limited to, the following: Support implementation of real-time / continuous forecasting capabilities Establish baseline analyses, define current and future state using traditional approaches and emerging digital technologies Identify which areas would benefit most from automation / AI / ML Identify additional process / governance changes to move from batch to continuous forecasting Closely partner with Business, Accounting, FP&A, Technology and other impacted functions to define and implement proposed changes Partners with Amgen Technology function to support both existing and new finance platforms Partners with local and global teams on use cases for Artificial Intelligence (AI), Machine Learning (ML) and Robotic Process Automation (RPA) Collaborate with cross-functional teams and Centers of Excellence globally to drive operational efficiency Contributes to a learning environment and enhances learning methodologies of technical tools where applicable. Serve as local financial systems and financial data subject matter expert, supporting local team with questions Supports global finance teams and business partners with centrally delivered financial reporting via tableau and other tools Supports local adoption of Anaplan for operating expense planning / tracking What we expect of you We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Bachelor’s degree and 0 to 3 years of Finance experience OR Diploma and 4 to 7 years of Finance experience Track record of supporting new finance capabilities Proficiency in data analytics and business intelligence tools. Experience with finance reporting and planning system technologies Experience with technical support of financial platforms Knowledge of financial management and accounting principles. Experience with ERP systems Resourceful individual who can “connect the dots” across matrixed organization Preferred Qualifications: Experience in pharmaceutical and/or biotechnology industry. Experience in financial planning, analysis, and reporting. Experience with global finance operations. Knowledge of advanced financial modeling techniques. 
Business performance management. Finance transformation experience involving recent technology advancements. Prior multinational capability center experience. Experience with Oracle Hyperion/EPM, SAP S/4, Anaplan, Tableau/Power BI, Databricks, Alteryx, data lakes, and data structures. Soft Skills: Excellent project management abilities. Strong communication and interpersonal skills. High level of integrity and ethical standards. Problem-solving and critical thinking capabilities. Ability to influence and drive change. Adaptability to a dynamic and fast-paced environment. Strong organizational and time management skills. What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
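To make the batch-to-continuous idea in this posting concrete, here is a minimal sketch of a forecast that is simply re-fit each time new actuals land, rather than being rebuilt in periodic batch cycles; the series, model choice, and horizon are hypothetical stand-ins, not Amgen's actual method.

```python
# Hypothetical rolling re-forecast: re-fit on the latest actuals whenever
# new data arrives, instead of rebuilding forecasts in periodic batches.
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def refresh_forecast(history: pd.Series, horizon: int = 3) -> pd.Series:
    """Re-fit a simple additive-trend model and project the next periods."""
    model = ExponentialSmoothing(history, trend="add", seasonal=None)
    return model.fit().forecast(horizon)

actuals = pd.Series(
    [100.0, 104.0, 108.0, 115.0, 119.0],
    index=pd.date_range("2024-01-01", periods=5, freq="MS"),
)
print(refresh_forecast(actuals))  # rerun on every new month of actuals
```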

Posted 2 months ago

Apply

3 - 5 years

4 - 8 Lacs

Gurugram

Work from Office

AHEAD builds platforms for digital business. By weaving together advances in cloud infrastructure, automation and analytics, and software delivery, we help enterprises deliver on the promise of digital transformation. At AHEAD, we prioritize creating a culture of belonging, where all perspectives and voices are represented, valued, respected, and heard. We create spaces to empower everyone to speak up, make change, and drive the culture at AHEAD. We are an equal opportunity employer, and do not discriminate based on an individual's race, national origin, color, gender, gender identity, gender expression, sexual orientation, religion, age, disability, marital status, or any other protected characteristic under applicable law, whether actual or perceived. We embrace all candidates that will contribute to the diversification and enrichment of ideas and perspectives at AHEAD. Data Engineer (Internally known as a Sr. Associate Technical Consultant) AHEAD is looking for a Technical Consultant Data Engineer to work closely with our dynamic project teams (both on-site and remotely). This Data Engineer will be responsible for hands-on engineering of data platforms that support our clients' advanced analytics, data science, and other data engineering initiatives. This consultant will build and support modern data environments that reside in the public cloud or multi-cloud enterprise architectures. The Data Engineer will be responsible for working on a variety of data projects. This includes orchestrating pipelines using modern data engineering tools/architectures as well as design and integration of existing transactional processing systems. As a Data Engineer, you will implement data pipelines to enable analytics and machine learning on rich datasets. Responsibilities: A Data Engineer should be able to build, operationalize, and monitor data processing systems. Create robust and automated pipelines to ingest and process structured and unstructured data from various source systems into analytical platforms, using batch and streaming mechanisms and leveraging cloud-native toolsets. Implement custom applications using tools such as Kinesis, Lambda, and other cloud-native tools as required to address streaming use cases. Engineer and support data structures including, but not limited to, SQL and NoSQL databases. Engineer and maintain ELT processes for loading the data lake (Snowflake, Cloud Storage, Hadoop). Leverage the right tools for the right job to deliver testable, maintainable, and modern data solutions. Respond to customer/team inquiries and assist in troubleshooting and resolving challenges. Work with other scrum team members to estimate and deliver work inside of a sprint. Research data questions, identify root causes, and interact closely with business users and technical resources. Qualifications: 3+ years of professional technical experience. 3+ years of hands-on data warehousing experience.
3+ years of experience building highly scalable data solutions using Hadoop, Spark, Databricks, and Snowflake. 2+ years of experience with programming languages such as Python. 3+ years of experience working in cloud environments (Azure). 2 years of experience in Redshift. Strong client-facing communication and facilitation skills. Key Skills: Python, Azure Cloud, Redshift, NoSQL, Git, ETL/ELT, Spark, Hadoop, Data Warehouse, Data Lake, Data Engineering, Snowflake, SQL/RDBMS, OLAP. Why AHEAD: Through our daily work and internal groups like Moving Women AHEAD and RISE AHEAD, we value and benefit from diversity of people, ideas, experience, and everything in between. We fuel growth by stacking our office with top-notch technologies in a multi-million-dollar lab, by encouraging cross-department training and development, and by sponsoring certifications and credentials for continued learning. USA Employment Benefits include: Medical, Dental, and Vision Insurance; 401(k); paid company holidays; paid time off; paid parental and caregiver leave; plus more! See https://www.aheadbenefits.com/ for additional details. The compensation range indicated in this posting reflects the On-Target Earnings (OTE) for this role, which includes a base salary and any applicable target bonus amount. This OTE range may vary based on the candidate's relevant experience, qualifications, and geographic location.
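As an illustration of the Kinesis/Lambda streaming use cases named in the responsibilities above, here is a minimal sketch of a Lambda handler consuming a Kinesis batch; the JSON payload shape and the stubbed-out staging step are hypothetical.

```python
# Minimal Kinesis-to-Lambda sketch: records arrive base64-encoded, and the
# partial-batch response tells Kinesis which records (if any) to retry.
import base64
import json

def handler(event, context):
    failures = []
    for record in event["Records"]:
        try:
            payload = base64.b64decode(record["kinesis"]["data"])
            doc = json.loads(payload)
            print(doc)  # stand-in for staging the event into the data lake
        except Exception:
            failures.append(
                {"itemIdentifier": record["kinesis"]["sequenceNumber"]})
    return {"batchItemFailures": failures}
```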

Posted 2 months ago

Apply

3 - 5 years

4 - 9 Lacs

Gurugram

Work from Office

AHEAD builds platforms for digital business. By weaving together advances in cloud infrastructure, automation and analytics, and software delivery, we help enterprises deliver on the promise of digital transformation. At AHEAD, we prioritize creating a culture of belonging, where all perspectives and voices are represented, valued, respected, and heard. We create spaces to empower everyone to speak up, make change, and drive the culture at AHEAD. We are an equal opportunity employer, and do not discriminate based on an individual's race, national origin, color, gender, gender identity, gender expression, sexual orientation, religion, age, disability, marital status, or any other protected characteristic under applicable law, whether actual or perceived. We embrace all candidates that will contribute to the diversification and enrichment of ideas and perspectives at AHEAD. Data Engineer (Internally known as a Technical Consultant) AHEAD is looking for a Technical Consultant Data Engineer to work closely with our dynamic project teams (both on-site and remotely). This Data Engineer will be responsible for hands-on engineering of data platforms that support our clients' advanced analytics, data science, and other data engineering initiatives. This consultant will build and support modern data environments that reside in the public cloud or multi-cloud enterprise architectures. The Data Engineer will be responsible for working on a variety of data projects. This includes orchestrating pipelines using modern data engineering tools/architectures as well as design and integration of existing transactional processing systems. As a Data Engineer, you will implement data pipelines to enable analytics and machine learning on rich datasets. Responsibilities: A Data Engineer should be able to build, operationalize, and monitor data processing systems. Create robust and automated pipelines to ingest and process structured and unstructured data from various source systems into analytical platforms, using batch and streaming mechanisms and leveraging cloud-native toolsets. Implement custom applications using tools such as Kinesis, Lambda, and other cloud-native tools as required to address streaming use cases. Engineer and support data structures including, but not limited to, SQL and NoSQL databases. Engineer and maintain ELT processes for loading the data lake (Snowflake, Cloud Storage, Hadoop). Leverage the right tools for the right job to deliver testable, maintainable, and modern data solutions. Respond to customer/team inquiries and assist in troubleshooting and resolving challenges. Work with other scrum team members to estimate and deliver work inside of a sprint. Research data questions, identify root causes, and interact closely with business users and technical resources. Qualifications: 3+ years of professional technical experience. 3+ years of hands-on data warehousing experience. 3+ years of experience building highly scalable data solutions using Hadoop, Spark, Databricks, and Snowflake. 2+ years of experience with programming languages such as Python. 3+ years of experience working in cloud environments (Azure). 2 years of experience in Redshift. Strong client-facing communication and facilitation skills. Key Skills: Python, Azure Cloud, Redshift, NoSQL, Git, ETL/ELT, Spark, Hadoop, Data Warehouse, Data Lake, Data Engineering, Snowflake, SQL/RDBMS, OLAP. Why AHEAD: Through our daily work and internal groups like Moving Women AHEAD and RISE AHEAD, we value and benefit from diversity of people, ideas, experience, and everything in between.
We fuel growth by stacking our office with top-notch technologies in a multi-million-dollar lab, by encouraging cross-department training and development, and by sponsoring certifications and credentials for continued learning. USA Employment Benefits include: Medical, Dental, and Vision Insurance; 401(k); paid company holidays; paid time off; paid parental and caregiver leave; plus more! See https://www.aheadbenefits.com/ for additional details. The compensation range indicated in this posting reflects the On-Target Earnings (OTE) for this role, which includes a base salary and any applicable target bonus amount. This OTE range may vary based on the candidate's relevant experience, qualifications, and geographic location.

Posted 2 months ago

Apply

12 - 15 years

0 - 0 Lacs

Thiruvananthapuram

Work from Office

EMPLOYMENT QUALIFICATIONS: EDUCATION:
- Bachelor's degree in a software/computer engineering field.
- Continuous learning, as defined by the Company's learning philosophy, is required.
- Certification or progress toward certification is highly preferred and encouraged.
SKILLS/KNOWLEDGE/ABILITIES (SKA) REQUIRED:
- Minimum 10 years of relevant IT experience in managing multiple information systems projects.
- Knowledge of the healthcare domain.
- Knowledge of ETL processes, SQL, and databases (Oracle/MS SQL), etc.
- Proficient in Power BI/Tableau, Google Data Studio, R, SQL, and Python.
- Strong knowledge of cloud computing and experience in Microsoft Azure (Azure ML Studio, Azure Machine Learning).
- Strong knowledge of SSIS.
- Proficient in Azure services: Azure Data Factory, Synapse, and Data Lake.
- Experience querying, analyzing, or managing data required.
- Experience in data cleansing, data engineering, data enrichment, and data warehousing/business intelligence preferred.
- Strong analytical, problem-solving, and planning skills.
- Strong organizational and presentation skills.
- Excellent interpersonal and communication skills.
- Ability to multi-task in a fast-paced environment.
- Flexibility to adapt readily to changing business needs in a fast-paced environment.
- Team player who is delivery-oriented and takes responsibility for the team's success.
- Enthusiastic, can-do attitude with the drive to continually learn and improve.
- Knowledge of Agile and/or Scrum methodologies.
- Knowledge of and experience using PPM tools, MS Project, Jira, Confluence, MS Excel, PowerPoint, SharePoint, and Visio.
- Exceptional interpersonal and communication skills, both oral and written.
- Ability to mentor junior team members with less experience.
- Ability to build relationships and work collaboratively with diverse leaders.
- Ability to communicate and influence across business and technical domains, and across all levels of the organization.
- Dynamic public speaking capabilities in front of large groups.
- Ability to communicate ideas to both technical and non-technical audiences.
- Comfortable with ambiguity and able to handle the unexpected with flexibility.
- Ability to work with a collaborative approach and build trust with others.
Required Skills: Healthcare, ETL, SQL, Power BI
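As a small illustration of the data-cleansing work this role calls for, here is a minimal pandas sketch; the file names and columns (patient_id, test_code, collected_at) are hypothetical placeholders, not a real clinical schema.

```python
# Hypothetical cleansing pass over a lab-results extract: dedupe, normalize
# test codes, and drop rows that lack the patient key needed for analysis.
import pandas as pd

raw = pd.read_csv("patient_labs.csv")

clean = (
    raw.drop_duplicates(subset=["patient_id", "test_code", "collected_at"])
       .assign(test_code=lambda d: d["test_code"].str.strip().str.upper())
       .dropna(subset=["patient_id"])
)
clean.to_parquet("patient_labs_clean.parquet")
```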

Posted 2 months ago

Apply

5 - 7 years

0 - 0 Lacs

Thiruvananthapuram

Work from Office

Role Proficiency: Act creatively to develop applications and select appropriate technical options, optimizing application development, maintenance, and performance by employing design patterns and reusing proven solutions; account for others' developmental activities.
Outcomes: Interpret the application/feature/component design and develop it in accordance with the specifications. Code, debug, test, document, and communicate product/component/feature development stages. Validate results with user representatives; integrate and commission the overall solution. Select appropriate technical options for development, such as reusing, improving, or reconfiguring existing components, or creating your own solutions. Optimize efficiency, cost, and quality. Influence and improve customer satisfaction. Set FAST goals for self/team; provide feedback on FAST goals of team members.
Measures of Outcomes: Adherence to engineering processes and standards (coding standards). Adherence to project schedule/timelines. Number of technical issues uncovered during the execution of the project. Number of defects in the code. Number of defects post-delivery. Number of non-compliance issues. On-time completion of mandatory compliance trainings.
Outputs Expected: Code: Code as per design. Follow coding standards, templates, and checklists. Review code for team and peers. Documentation: Create/review templates, checklists, guidelines, and standards for design/process/development. Create/review deliverable documents, design documentation, and requirements/test cases and results. Configure: Define and govern the configuration management plan. Ensure compliance from the team. Test: Review and create unit test cases, scenarios, and execution. Review the test plan created by the testing team. Provide clarifications to the testing team. Domain relevance: Advise software developers on the design and development of features and components with a deep understanding of the business problem being addressed for the client.
Learn more about the customer domain, identifying opportunities to provide valuable additions to customers. Complete relevant domain certifications. Manage Project: Manage delivery of modules and/or manage user stories. Manage Defects: Perform defect RCA and mitigation. Identify defect trends and take proactive measures to improve quality. Estimate: Create and provide input for effort estimation for projects. Manage knowledge: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review the reusable documents created by the team. Release: Execute and monitor the release process. Design: Contribute to the creation of design (HLD, LLD, SAD)/architecture for applications/features/business components/data models. Interface with Customer: Clarify requirements and provide guidance to the development team. Present design options to customers. Conduct product demos. Manage Team: Set FAST goals and provide feedback. Understand the aspirations of team members and provide guidance, opportunities, etc. Ensure the team is engaged in the project. Certifications: Obtain relevant domain/technology certifications.
Skill Examples: Explain and communicate the design/development to the customer. Perform and evaluate test results against product specifications. Break down complex problems into logical components. Develop user interfaces and business software components. Use data models. Estimate the time and effort required for developing/debugging features/components. Perform and evaluate tests in the customer or target environment. Make quick decisions on technical/project-related challenges. Manage a team, mentor, and handle people-related issues in the team. Maintain high motivation levels and positive dynamics in the team. Interface with other teams, designers, and other parallel practices. Set goals for self and team. Provide feedback to team members. Create and articulate impactful technical presentations. Follow a high level of business etiquette in emails and other business communication. Drive conference calls with customers, addressing customer questions. Proactively ask for and offer help. Ability to work under pressure, determine dependencies and risks, and facilitate planning while handling multiple tasks. Build confidence with customers by meeting deliverables on time and with quality. Estimate the time, effort, and resources required for developing/debugging features/components. Make appropriate utilization of software and hardware. Strong analytical and problem-solving abilities.
Knowledge Examples: Appropriate software programs/modules. Functional and technical design. Programming languages (proficient in multiple skill clusters). DBMS. Operating systems and software platforms. Software Development Life Cycle. Agile methods (Scrum or Kanban). Integrated development environments (IDE). Rapid application development (RAD). Modelling technologies and languages. Interface definition languages (IDL). Knowledge of the customer domain and deep understanding of the sub-domain where the problem is solved.
Additional Comments: Responsibilities: - Understand business requirements and existing system designs, security applications and guidelines, etc. - Work with various SMEs to understand business process flows, functional requirements specifications of the existing system, their current challenges and constraints, and future expectations. - Streamline the process of sourcing and organizing data (from a wide variety of data sources using Python, PySpark, SQL, and Spark) and accelerating data for analysis.
- Support the data curation process by feeding the data catalog and knowledge bases. - Create data tools for analytics and data scientist team members that assist them in building and optimizing data products for consumption. - Work with data and analytics experts to strive for greater functionality in the data systems. - Clearly articulate data stories using data science, advanced statistical analysis, visualization tools, PowerPoint presentations, and written and oral communication. - Manage technical, analytical, and business documentation on all data efforts. - Engage in hands-on development and work with both onsite and offsite leads and engineers. Competencies: - 5+ years of experience in building data engineering pipelines on both on-premises and cloud platforms (Snowflake). - 5+ years of experience in developing Python-based data applications to support data ingestion, transformation, and data visualizations (Plotly, Streamlit, Flask, Dask). - Strong experience coding in Python, PySpark, and SQL, and building automations. - Knowledge of cybersecurity, IT infrastructure, and software concepts. - 3+ years of experience using data warehousing / data lake techniques in cloud environments. - 3+ years of experience developing data visualizations using Tableau, Plotly, and Streamlit. - Experience with ELT/ETL tools like dbt, Cribl, etc. - Experience capturing incremental data changes, streaming data ingestion, and stream processing. - Experience in processes supporting data governance, data structures, and metadata management. - Solid grasp of data and analytics concepts and methodologies, including data science, data engineering, and data storytelling. Required Skills: Python, SQL, Cloud Platform
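To ground the sourcing-and-organizing responsibility described above, here is a minimal PySpark sketch of landing raw events into a curated zone; the S3 paths, columns, and partitioning choice are hypothetical.

```python
# Hypothetical curation step: read raw JSON events, derive a partition date,
# filter unusable rows, and write analysis-ready Parquet.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curate-security-events").getOrCreate()

events = (
    spark.read.json("s3://raw-zone/security-events/")   # raw landing zone
         .withColumn("event_date", F.to_date("event_ts"))
         .filter(F.col("severity").isNotNull())         # drop unusable rows
)

(events.write.mode("append")
       .partitionBy("event_date")                       # partition for analysis
       .parquet("s3://curated-zone/security-events/"))
```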

Posted 2 months ago

Apply

3 - 7 years

8 - 11 Lacs

Gurugram

Work from Office

KDataScience (USA & INDIA) is looking for a Senior Data Engineer to join our dynamic team and embark on a rewarding career journey. Responsibilities include: Designing and implementing scalable and reliable data pipelines, data models, and data infrastructure for processing large and complex datasets. Developing and maintaining databases, data warehouses, and data lakes that store and manage the organization's data. Developing and implementing data integration and ETL (Extract, Transform, Load) processes to ensure that data flows smoothly and accurately between different systems and data sources. Ensuring data quality, consistency, and accuracy through data profiling, cleansing, and validation. Building and maintaining data processing and analytics systems that support business intelligence, machine learning, and other data-driven applications. Optimizing the performance and scalability of data systems and infrastructure to ensure that they can handle the organization's growing data needs.
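As a small illustration of the profiling and validation responsibility above, here is a minimal pandas sketch; the dataset and the order_id key are hypothetical.

```python
# Hypothetical data-quality gate: profile null rates and fail fast if the
# primary key is duplicated before the data feeds downstream systems.
import pandas as pd

df = pd.read_parquet("orders.parquet")

profile = {
    "rows": len(df),
    "null_rate": df.isna().mean().round(3).to_dict(),   # per-column null fraction
    "duplicate_keys": int(df.duplicated(subset=["order_id"]).sum()),
}
assert profile["duplicate_keys"] == 0, "duplicate order_id values found"
print(profile)
```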

Posted 2 months ago

Apply

4 - 9 years

6 - 11 Lacs

Mumbai

Work from Office

Job Title - Sales Excellence - Client Success - Data Engineering Specialist - CF Management Level: ML9 Location: Open Must have skills: GCP, SQL, Data Engineering, Python Good to have skills: Managing ETL pipelines. Job Summary: We are: Sales Excellence. Sales Excellence at Accenture empowers our people to compete, win and grow. We provide everything they need to grow their client portfolios, optimize their deals and enable their sales talent, all driven by sales intelligence. The team is aligned to Client Success, a new function supporting Accenture's approach to putting client value and client experience at the heart of everything we do to foster client love. Our ambition is that every client loves working with Accenture and believes we're the ideal partner to help them create and realize their vision for the future – beyond their expectations. You are: A builder at heart – curious about new tools and their usefulness, eager to create prototypes, and adaptable to changing paths. You enjoy sharing your experiments with a small team and are responsive to the needs of your clients. The work: The Center of Excellence (COE) enables Sales Excellence to deliver best-in-class service offerings to Accenture leaders, practitioners, and sales teams. As a member of the COE Analytics Tools & Reporting team, you will help build and enhance the data foundation for reporting and analytics tools, providing insights on underlying trends and key drivers of the business. Roles & Responsibilities: Collaborate with the Client Success, Analytics COE, CIO Engineering/DevOps teams, and stakeholders to build and enhance the Client Success data lake. Write complex SQL scripts to transform data for the creation of dashboards or reports, and validate the accuracy and completeness of the data. Build automated solutions to support any business operation or data transfer. Document and build efficient data models for reporting and analytics use cases. Assure the data lake's accuracy, consistency, and timeliness while ensuring user acceptance and satisfaction. Work with the Client Success and Sales Excellence COE members, the CIO Engineering/DevOps team, and Analytics Leads to standardize data in the data lake. Professional & Technical Skills: Bachelor's degree or equivalent experience in Data Engineering, analytics, or a similar field. At least 4 years of professional experience in developing and managing ETL pipelines. A minimum of 2 years of GCP experience. Ability to write complex SQL and prepare data for dashboarding. Experience in managing and documenting data models. Understanding of data governance and policies. Proficiency in Python and SQL scripting languages. Ability to translate business requirements into technical specifications for the engineering team. Curiosity, creativity, a collaborative attitude, and attention to detail. Ability to explain technical information to technical as well as non-technical users. Ability to work remotely with minimal supervision in a global environment. Proficiency with Microsoft Office tools. Additional Information: Master's degree in analytics or a similar field. Data visualization or reporting using text data as well as sales, pricing, and finance data. Ability to prioritize workload and manage downstream stakeholders. About Our Company | Accenture Qualifications Experience: Minimum 5 years of experience is required. Educational Qualification: Bachelor's degree or equivalent experience in Data Engineering, analytics, or a similar field.
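To illustrate the kind of SQL transformation and validation this role describes, here is a minimal sketch against BigQuery on GCP; the dataset and table names are hypothetical, not the actual Client Success data lake schema.

```python
# Hypothetical BigQuery transformation plus a completeness check: build a
# daily aggregate, then verify the derived table is not empty.
from google.cloud import bigquery

client = bigquery.Client()

transform = """
CREATE OR REPLACE TABLE analytics.client_success_daily AS
SELECT client_id, DATE(event_ts) AS day, COUNT(*) AS touchpoints
FROM raw.client_events
GROUP BY client_id, day
"""
client.query(transform).result()  # run the transformation to completion

rows = list(client.query(
    "SELECT COUNT(*) AS n FROM analytics.client_success_daily").result())
assert rows[0].n > 0, "transformation produced no rows"
```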

Posted 2 months ago

Apply