
11 Data Fabric Jobs

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

13.0 - 20.0 years

35 - 70 Lacs

Bengaluru, Mumbai (All Areas)

Work from Office

Required Skills and Experience:
- 13+ years of overall experience is a must, with 7+ years of relevant experience on Big Data Platform technologies.
- Proven technical skills across Cloudera, Teradata, Databricks, MS Data Fabric, Apache Hadoop, BigQuery, and AWS Big Data solutions (EMR, Redshift, Kinesis, Qlik).
- Good domain experience in the BFSI or Manufacturing area.
- Excellent communication skills to engage with clients and influence decisions.
- High level of competence in preparing architectural documentation and presentations.
- Must be organized, self-sufficient, and able to manage multiple initiatives simultaneously.
- Must be able to coordinate with other teams independently.
- Work with both internal and external stakeholders to identify business requirements and develop solutions that meet those requirements and build the opportunity.
Note: If you have experience in the BFSI domain, the location will be Mumbai only. If you have experience in the Manufacturing domain, the location will be Mumbai & Bangalore only. Interested candidates can share their updated resumes at shradha.madali@sdnaglobal.com

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

At Moody's, we aim to unite the brightest minds to transform today's risks into tomorrow's opportunities. We strive to cultivate an inclusive environment where everyone is encouraged to express their true selves, exchange ideas freely, think innovatively, and engage with each other and customers in meaningful ways. If you are enthusiastic about this opportunity, even if you do not meet every requirement listed, we encourage you to apply. You may still be a great fit for this role or other available positions. We are looking for candidates who embody our values: investing in every relationship, leading with curiosity, championing diverse perspectives, turning ideas into actions, and upholding trust through integrity.

Skills and Competencies
- Experience with industry-standard data transformation, low-code automation, and business intelligence solutions, and operational responsibility for tools like Power BI, Alteryx, and Automation Anywhere.
- Familiarity with Python, Data Fabric, and a working understanding of Hyperion/OneStream (EPM) would be advantageous.
- Proficiency in SQL for working with both structured and unstructured data.
- Strong knowledge of data structures, algorithms, and software development best practices.
- Understanding of version control systems such as Git and agile methodologies.
- Knowledge of cloud platforms like AWS, Azure, or Google Cloud is a plus.
- Effective communication skills to articulate technical concepts to non-technical stakeholders.
- Strong problem-solving abilities and the capability to work independently or collaboratively within a team.

Education
- Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.

Responsibilities
- Support operational processes within the Data and Analytics team.
- Contribute to process documentation (SOPs) and help establish guidelines to standardize and enhance operations.
- Support Automation/Data/BI servers to ensure system performance and reliability.
- Monitor and support automation, BI, and data processes to maintain seamless operations and address issues promptly.
- Help manage security and access controls to safeguard data and uphold data integrity.
- Track automation, Gen AI, data, and BI use cases across the team.
- Support Gen AI application environments, security, and post-production operations.
- Assist in BI incident management and efficiently route issues to the appropriate area.
- Monitor and report on metrics to evaluate performance, ensure operational efficiency, and identify areas for enhancement.
- Maintain comprehensive documentation and communication to ensure transparency.

About The Team
The Automation Operations & Innovation Team is a dynamic group within the Data and Analytics department focused on enhancing operational efficiency and driving digital transformation through automation, GenAI, business intelligence, and data management. The team collaborates with developers, analysts, and process managers to design and implement scalable solutions using tools like Alteryx, Power BI, Automation Anywhere, and Python. Through collaboration, innovation, and continuous improvement, the team supports strategic initiatives by streamlining workflows, enhancing data quality, and integrating emerging technologies.

Candidates applying to Moody's Corporation may be required to disclose securities holdings in accordance with Moody's Policy for Securities Trading and the position's requirements. Employment is subject to adherence to the Policy, including any necessary remediation of positions in those holdings.

Posted 2 weeks ago

Apply

9.0 - 12.0 years

9 - 12 Lacs

Hyderabad, Telangana, India

On-site

The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines to support structured, semi-structured, and unstructured data processing across the Enterprise Data Fabric (a PySpark sketch follows this listing).
- Implement real-time and batch data processing solutions, integrating data from multiple sources into a unified, governed data fabric architecture.
- Optimize big data processing frameworks using Apache Spark, Hadoop, or similar distributed computing technologies to ensure high availability and cost efficiency.
- Work with metadata management and data lineage tracking tools to enable enterprise-wide data discovery and governance.
- Ensure data security, compliance, and role-based access control (RBAC) across data environments.
- Optimize query performance, indexing strategies, partitioning, and caching for large-scale data sets.
- Develop CI/CD pipelines for automated data pipeline deployments, version control, and monitoring.
- Implement data virtualization techniques to provide seamless access to data across multiple storage systems.
- Collaborate with cross-functional teams, including data architects, business analysts, and DevOps teams, to align data engineering strategies with enterprise goals.
- Stay up to date with emerging data technologies and best practices, ensuring continuous improvement of Enterprise Data Fabric architectures.

Must-Have Skills:
- Hands-on experience with data engineering technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies.
- Proficiency in workflow orchestration and performance tuning for big data processing.
- Strong understanding of AWS services.
- Experience with Data Fabric, Data Mesh, or similar enterprise-wide data architectures.
- Ability to quickly learn, adapt, and apply new technologies.
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork skills.
- Experience with the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.

Good-to-Have Skills:
- Deep expertise in the Biotech & Pharma industries.
- Experience writing APIs to make data available to consumers.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Education and Professional Certifications:
- 9 to 12 years of Computer Science, IT, or related field experience.
- AWS Certified Data Engineer preferred.
- Databricks certification preferred.
- Scaled Agile SAFe certification preferred.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly and stay organized and detail-oriented.
- Strong presentation and public speaking skills.
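To make the first responsibility concrete, here is a minimal, hedged sketch of a batch ETL step in PySpark. The bucket paths and column names are hypothetical placeholders, not details from the posting.

```python
# Minimal PySpark batch ETL sketch: read raw JSON, clean it, and write
# a partitioned Parquet dataset. Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: semi-structured source data (hypothetical S3 path).
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Transform: drop malformed rows, normalize types, derive a partition key.
clean = (
    raw.dropna(subset=["order_id", "order_ts"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
)

# Load: partitioned columnar output for efficient downstream queries.
clean.write.mode("overwrite").partitionBy("order_date") \
     .parquet("s3://example-bucket/curated/orders/")
```

Partitioning by a date column is a common default; a real pipeline would choose the partition key from actual query patterns.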

Posted 1 month ago

Apply

5.0 - 10.0 years

4 - 10 Lacs

Hyderabad, Telangana, India

On-site

Key Deliverables:
- Design and deploy scalable ETL/ELT pipelines for structured and unstructured data
- Implement real-time and batch data processing on Enterprise Data Fabric (see the streaming sketch below)
- Optimize big data performance using Apache Spark and AWS stack
- Enable enterprise-wide data discovery, governance, and CI/CD integration

Role Responsibilities:
- Collaborate across teams to align data engineering with business strategy
- Ensure data security, compliance, and access control in distributed environments
- Build metadata-driven data pipelines with version control and monitoring
- Integrate diverse data sources into a unified, governed architecture
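As an illustration of the real-time side of these deliverables, here is a minimal Spark Structured Streaming sketch that reads from Kafka and lands micro-batches as Parquet. The broker address, topic, and paths are hypothetical assumptions, not details from the posting.

```python
# Minimal streaming ingestion sketch: Kafka -> Spark Structured Streaming
# -> Parquet. Broker, topic, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Read a Kafka topic as an unbounded stream (requires the
# spark-sql-kafka connector package on the Spark classpath).
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "events")
         .load()
         .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
)

# Write each micro-batch to durable storage with checkpointing so the
# job can resume where it left off after a failure.
query = (
    events.writeStream.format("parquet")
          .option("path", "s3://example-bucket/stream/events/")
          .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
          .start()
)
query.awaitTermination()
```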

Posted 1 month ago

Apply

3.0 - 13.0 years

3 - 13 Lacs

Hyderabad, Telangana, India

On-site

Key Deliverables:
- Design and deploy scalable ETL/ELT pipelines for structured and unstructured data
- Implement real-time and batch data processing on Enterprise Data Fabric
- Optimize big data performance using Apache Spark and AWS stack
- Enable enterprise-wide data discovery, governance, and CI/CD integration

Role Responsibilities:
- Collaborate across teams to align data engineering with business strategy
- Ensure data security, compliance, and role-based access across systems
- Drive performance tuning and metadata-driven architecture development (a metadata-driven sketch follows below)
- Adopt and implement emerging data technologies and DevOps practices
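"Metadata-driven" here typically means that pipeline behavior is described in configuration rather than hard-coded per source. Below is a minimal, hedged sketch of that pattern in PySpark; the table entries, formats, and paths are hypothetical illustrations, not the employer's actual design.

```python
# Metadata-driven ingestion sketch: one generic loader driven by a small
# metadata table instead of one script per source. Entries are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("metadata-driven-ingest").getOrCreate()

# In practice this metadata usually lives in a catalog or control table;
# an inline list keeps the sketch self-contained.
PIPELINES = [
    {"name": "orders",    "format": "json", "source": "s3://example/raw/orders/",
     "target": "s3://example/curated/orders/",    "partition_by": "order_date"},
    {"name": "customers", "format": "csv",  "source": "s3://example/raw/customers/",
     "target": "s3://example/curated/customers/", "partition_by": "country"},
]

def run_pipeline(meta: dict) -> None:
    """Load one source as described by its metadata entry."""
    df = spark.read.format(meta["format"]).option("header", "true").load(meta["source"])
    (df.write.mode("overwrite")
       .partitionBy(meta["partition_by"])
       .parquet(meta["target"]))

for meta in PIPELINES:
    run_pipeline(meta)
```

Adding a new source then becomes a metadata change rather than new code, which is what makes the approach attractive for governance and monitoring.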

Posted 1 month ago

Apply

5.0 - 9.0 years

15 - 30 Lacs

Pune, Bengaluru

Work from Office

Hiring for Appian developer for Wipro Limited
* Excellent English communication
* 5+ years of hands-on experience in Appian BPM
* Knowledge of or working experience with SAP or an enterprise system
* Notice period: Immediate to 60 days
HR Kanchan 9691001643
Required Candidate profile
1. Appian developer - L2 certification is mandatory (B3)
2. Appian developer - L3 certification is mandatory (C1)
Lead or support solution design discussions with onshore leads based in the UK/NL

Posted 1 month ago

Apply

4.0 - 9.0 years

15 - 30 Lacs

Pune, Bengaluru

Work from Office

Hiring for Appian developer for Wipro Limited
* Excellent English communication
* 5+ years of hands-on experience in Appian BPM
* Knowledge of or working experience with SAP or an enterprise system
* Notice period: Immediate to 60 days
HR Kanchan 9691001643
Required Candidate profile
Experience with Appian design patterns, objects, interfaces, and best practices
Ability to work independently and replace senior consultants effectively

Posted 1 month ago

Apply

4.0 - 9.0 years

15 - 30 Lacs

Noida, Pune, Bengaluru

Work from Office

Hiring for Appian developer for Wipro Limited
* Excellent English communication
* 5+ years of hands-on experience in Appian BPM
* Knowledge of or working experience with SAP or an enterprise system
* Notice period: Immediate to 60 days
HR Kanchan 9691001643
Required Candidate profile
Experience with Appian design patterns, objects, interfaces, and best practices
Ability to work independently and replace senior consultants effectively

Posted 1 month ago

Apply

3.0 - 4.0 years

4 - 6 Lacs

Hyderabad

Work from Office

Senior Manager Information Systems Automation

What you will do
We are seeking a hands-on, experienced, and dynamic Technical Infrastructure Automation Manager to lead and manage our infrastructure automation initiatives. The ideal candidate will have a strong hands-on background in IT infrastructure, cloud services, and automation tools, along with the leadership skills to guide a team towards improving operational efficiency, reducing manual processes, and ensuring scalability of systems. This role will lead a team of engineers across multiple functions, including Ansible Development, ServiceNow Development, Process Automation, and Site Reliability Engineering (SRE), and will be responsible for ensuring the reliability, scalability, and security of automation services. The Infrastructure Automation team will be responsible for automating infrastructure provisioning, deployment, configuration management, and monitoring. You will work closely with development, operations, and security teams to drive automation solutions that enhance the overall infrastructure's efficiency and reliability. This role demands the ability to drive and deliver against key organizational strategic initiatives, foster a collaborative environment, and deliver high-quality results in a matrixed organizational structure. Please note, this is an onsite role based in Hyderabad.

Roles & Responsibilities:
Automation Strategy & Leadership: Lead the development and implementation of infrastructure automation strategies. Collaborate with key collaborators (DevOps, IT Operations, Security, etc.) to define automation goals and ensure alignment with company objectives. Provide leadership and mentorship to a team of engineers, ensuring continuous growth and skill development.
Infrastructure Automation: Design and implement automation frameworks for infrastructure provisioning, configuration management, and orchestration (e.g., using tools like Terraform, Ansible, Puppet, Chef, etc.). Manage and optimize CI/CD pipelines for infrastructure as code (IaC) to ensure seamless delivery and updates. Work with cloud providers (AWS, Azure, GCP) to implement automation solutions for managing cloud resources and services.
Process Improvement: Identify areas for process improvement by analyzing current workflows, systems, and infrastructure operations. Create and implement solutions to reduce operational overhead and increase system reliability, scalability, and security. Automate and streamline recurring tasks, including patch management, backups, and system monitoring (a patch-automation sketch follows below).
Collaboration & Communication: Collaborate with multi-functional teams (Development, IT Operations, Security, etc.) to ensure infrastructure automation aligns with business needs. Regularly communicate progress, challenges, and successes to management, offering insights on how automation is driving efficiencies.
Documentation & Standards: Maintain proper documentation for automation scripts, infrastructure configurations, and processes. Develop and enforce best practices and standards for automation and infrastructure management.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree with 8-10 years of experience in Observability Operations, with at least 3 years in management; OR Bachelor's degree with 10-14 years of experience in Observability Operations, with at least 4 years in management; OR Diploma with 14-18 years of experience in Observability Operations, with at least 5 years in management.
- 12+ years of experience in IT infrastructure management, with at least 4+ years in a leadership or managerial role.
- Strong expertise in automation tools and frameworks such as Terraform, Ansible, Chef, Puppet, or similar.
- Proficiency in scripting languages (e.g., Python, Bash, PowerShell).
- Hands-on experience with cloud platforms (AWS) and containerization technologies (Docker, Kubernetes).
- Hands-on experience with Infrastructure as Code (IaC) principles and CI/CD pipeline implementation.
- Experience with ServiceNow development and administration.
- Solid understanding of networking, security protocols, and infrastructure design.
- Excellent problem-solving skills and the ability to troubleshoot complex infrastructure issues.
- Strong leadership and communication skills, with the ability to work effectively across teams.

Professional Certifications (Preferred):
- ITIL or PMP Certification
- Red Hat Certified System Administrator
- ServiceNow Certified System Administrator
- AWS Certified Solutions Architect

Preferred Qualifications:
- Strong experience with Ansible, including playbooks, roles, and modules.
- Strong experience with infrastructure-as-code concepts and other automation tools like Terraform or Puppet.
- Strong understanding of user-centered design and of building scalable, high-performing web and mobile interfaces on the ServiceNow platform.
- Proficiency with both Windows and Linux/Unix-based operating systems.
- Knowledge of cloud platforms (AWS, Azure, Google Cloud) and automation techniques in those environments.
- Familiarity with CI/CD tools and processes, particularly the integration of Ansible in pipelines.
- Understanding of version control systems (Git).
- Strong troubleshooting, debugging, and performance optimization skills.
- Experience with hybrid cloud environments and multi-cloud strategies.
- Familiarity with DevOps practices and tools.
- Experience operating within a validated systems environment (FDA, European Agency for the Evaluation of Medicinal Products, Ministry of Health, etc.).

Soft Skills:
- Excellent leadership and team management skills.
- Change management expertise.
- Crisis management capabilities.
- Strong presentation and public speaking skills.
- Analytical mindset with a focus on continuous improvement.
- Detail-oriented with the capacity to manage multiple projects and priorities.
- Self-motivated and able to work independently or as part of a team.
- Strong communication skills to effectively interact with both technical and non-technical collaborators.
- Ability to work effectively with global, virtual teams.

Shift Information:
This position is an onsite role and may require working during later hours to align with business hours. Candidates must be willing and able to work outside of standard hours as required to meet business needs.
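As one concrete flavor of the patch-management automation mentioned above, here is a minimal, hedged Python sketch using boto3 and AWS Systems Manager. The tag filter and region are hypothetical assumptions; a real rollout would add error handling, scheduling, and compliance reporting.

```python
# Minimal patch-automation sketch: trigger the managed AWS-RunPatchBaseline
# document on all instances carrying a (hypothetical) PatchGroup tag via SSM.
import boto3

ssm = boto3.client("ssm", region_name="ap-south-1")  # region is an assumption

# Target instances by tag rather than hard-coded IDs, so the automation
# keeps working as the fleet changes.
response = ssm.send_command(
    Targets=[{"Key": "tag:PatchGroup", "Values": ["web-servers"]}],
    DocumentName="AWS-RunPatchBaseline",      # AWS-managed patching document
    Parameters={"Operation": ["Install"]},    # a scan-only run would use ["Scan"]
    Comment="Monthly patch run (sketch)",
)

print("Command ID:", response["Command"]["CommandId"])
```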

Posted 1 month ago

Apply

10.0 - 12.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Req ID: 323226
NTT DATA strives to hire exceptional, innovative, and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Digital Solution Architect Sr. Advisor to join our team in Bengaluru, Karnataka (IN-KA), India (IN).

Key Responsibilities:
- Design data platform architectures (data lakes, lakehouses, DWH) using modern cloud-native tools (e.g., Databricks, Snowflake, BigQuery, Synapse, Redshift).
- Architect data ingestion, transformation, and consumption pipelines using batch and streaming methods.
- Enable real-time analytics and machine learning through scalable and modular data frameworks.
- Define data governance models, metadata management, lineage tracking, and access controls.
- Collaborate with AI/ML, application, and business teams to identify high-impact use cases and optimize data usage.
- Lead modernization initiatives from legacy data warehouses to cloud-native and distributed architectures (see the lakehouse sketch below).
- Enforce data quality and observability practices for mission-critical workloads.

Required Skills:
- 10+ years in data architecture, with a strong grounding in modern data platforms and pipelines.
- Deep knowledge of SQL/NoSQL, Spark, Delta Lake, Kafka, and ETL/ELT frameworks.
- Hands-on experience with cloud data platforms (AWS, Azure, GCP).
- Understanding of data privacy, security, lineage, and compliance (GDPR, HIPAA, etc.).
- Experience implementing data mesh/data fabric concepts is a plus.
- Expertise in writing and presenting technical solutions using tools such as Word, PowerPoint, Excel, Visio, etc.
- High level of executive presence, able to articulate solutions to CXO-level executives.

Preferred Qualifications:
- Certifications in Snowflake, Databricks, or cloud-native data platforms.
- Exposure to AI/ML data pipelines, MLOps, and real-time data applications.
- Familiarity with data visualization and BI tools (Power BI, Tableau, Looker, etc.).

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize, and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future.

NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status.
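To ground the lakehouse modernization theme, here is a minimal, hedged sketch of writing and querying a Delta Lake table with PySpark. It assumes the delta-spark package is available on the Spark session; the table contents and paths are hypothetical.

```python
# Minimal lakehouse sketch: land a batch of records as a Delta table,
# then query it with SQL. Assumes the delta-spark package is installed.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("lakehouse-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# A tiny in-memory batch stands in for an ingestion pipeline.
batch = spark.createDataFrame(
    [("c1", "2024-01-01", 120.0), ("c2", "2024-01-01", 75.5)],
    ["customer_id", "order_date", "amount"],
)

# Delta adds ACID transactions and time travel on top of Parquet files,
# which is the core of the warehouse-to-lakehouse migration story.
batch.write.format("delta").mode("append").save("/tmp/lake/orders")

# Downstream consumers can query the same table through SQL.
spark.read.format("delta").load("/tmp/lake/orders").createOrReplaceTempView("orders")
spark.sql("SELECT order_date, SUM(amount) AS total FROM orders GROUP BY order_date").show()
```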

Posted 2 months ago

Apply

13.0 - 21.0 years

45 - 60 Lacs

Hyderabad

Hybrid

Job Description Summary:
As a Data Architect, you will play a pivotal role in defining and implementing common data models and API standards, and in leveraging the Common Information Model (CIM) standard across a portfolio of products deployed in Critical National Infrastructure (CNI) environments globally. GE Vernova is the leading software provider for the operations of national and regional electricity grids worldwide. Our software solutions range from supporting electricity markets, to enabling grid and network planning, to real-time electricity grid operations. In this senior technical role, you will collaborate closely with lead software architects to ensure secure, performant, and composable designs and implementations across our portfolio.

Job Description
Grid Software (a division of GE Vernova) is driving the vision of GridOS - a portfolio of software running on a common platform to meet the fast-changing needs of the energy sector and support the energy transition. Grid Software has extensive and well-established software stacks that are progressively being ported to a common microservice architecture, delivering a composable suite of applications. Simultaneously, new applications are being designed and built on the same common platform to provide innovative solutions that enable our customers to accelerate the energy transition. This role is for a senior data architect who understands the core designs, principles, and technologies of GridOS.

Key responsibilities include:
Formalizing Data Models and API Standards: Lead the formalization and standardization of data models and API standards across products to ensure interoperability and efficiency.
Leveraging CIM Standards: Implement and advocate for the Common Information Model (CIM) standards to ensure consistent data representation and exchange across systems (a minimal CIM-style sketch follows below).
Architecture Reviews and Coordination: Contribute to architecture reviews across the organization as part of Architecture Review Boards (ARB) and the Architecture Decision Record (ADR) process.
Knowledge Transfer and Collaboration: Work with the Architecture SteerCo and Developer Standard Practices team to establish standard practice around data modeling and API design.
Documentation: Ensure that data modeling and API standards are accurately documented and maintained in collaboration with documentation teams.
Backlog Planning and Dependency Management: Work across software teams to prepare backlog planning and to identify and manage cross-team dependencies for data modeling and API requirements.

Key Knowledge Areas and Expertise
Data Architecture and Modeling: Extensive experience in designing and implementing data architectures and common data models.
API Standards: Expertise in defining and implementing API standards to ensure seamless integration and data exchange between systems.
Common Information Model (CIM): In-depth knowledge of CIM standards and their application within the energy sector.
Data Mesh and Data Fabric: Understanding of data mesh and data fabric principles, enabling software composability and data-centric design trade-offs.
Microservice Architecture: Understanding of microservice architecture and software development.
Kubernetes: Understanding of Kubernetes, including software development in an orchestrated microservice architecture. This includes the Kubernetes API, custom resources, API aggregation, Helm, and manifest standardization.
CI/CD and DevSecOps: Experience with CI/CD pipelines, DevSecOps practices, and GitOps, especially in secure, air-gapped environments.
Mobile Software Architecture: Knowledge of mobile software architecture for field crew operations, offline support, and near-real-time operation.

Additional Knowledge (Advantageous but not Essential)
Energy Industry Technologies: Familiarity with key technologies specific to the energy industry, such as Supervisory Control and Data Acquisition (SCADA), geospatial network modeling, etc.

This is a critical role within Grid Software, requiring a broad range of knowledge and strong organizational and communication skills to drive common architecture, software standards, and principles across the organization.
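To illustrate what a CIM-aligned common data model looks like in code, here is a minimal, hedged Python sketch. The CIM standards family (IEC 61968/61970) defines classes such as IdentifiedObject and ACLineSegment; the attribute subset and JSON serialization below are simplified illustrations, not GridOS's actual implementation.

```python
# Minimal CIM-style data model sketch. CIM roots most classes in
# IdentifiedObject (with an mRID master resource identifier); the
# attribute subset shown here is a simplified illustration.
from dataclasses import dataclass, asdict
import json
import uuid

@dataclass
class IdentifiedObject:
    """Base class: every CIM object carries a stable mRID and a name."""
    mRID: str
    name: str

@dataclass
class ACLineSegment(IdentifiedObject):
    """CIM class for an AC transmission line segment (subset of attributes)."""
    length: float  # metres
    r: float       # positive-sequence series resistance, ohms
    x: float       # positive-sequence series reactance, ohms

# Exchanging the model as JSON stands in here for CIM's RDF/XML profiles.
segment = ACLineSegment(
    mRID=str(uuid.uuid4()),
    name="Feeder-12 segment A",
    length=1450.0,
    r=0.12,
    x=0.39,
)
print(json.dumps(asdict(segment), indent=2))
```

The value of the shared base class is that every product in the portfolio identifies and exchanges objects the same way, which is what makes cross-product interoperability tractable.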

Posted 2 months ago

Apply