
946 Metadata Jobs - Page 14

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

15.0 - 20.0 years

20 - 25 Lacs

Bengaluru

Work from Office

To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts.

Job Category: Customer Success

About Salesforce: The SF Data Cloud Architect plays a critical role within Salesforce's Professional Services team, assisting in pre-sales and leading the design and implementation of enterprise-grade data management solutions. This position is responsible for architecting scalable solutions across enterprise landscapes using Data Cloud, ensuring data is ready for enterprise AI, applying data governance guardrails, and supporting enterprise analytics and automation. The role covers the ANZ, ASEAN and India markets. The ideal candidate brings deep expertise in data architecture, the project lifecycle and the Salesforce ecosystem, combined with strong soft skills, stakeholder engagement and technical writing ability. You will collaborate with cross-functional teams to shape the future of customers' data ecosystems and enable data excellence at scale.

Key Responsibilities:
- Salesforce Data Cloud Trusted Advisor: Be a trusted SME for Data Cloud, supporting and leading project delivery and customer engagements during the pre-sales cycle, including how Data Cloud relates to the success of AI.
- Architecture Support: Provide data and system architecture guidance to Salesforce account teams and customers by reviewing proposed architectures; peer-review project effort estimates, scope and delivery considerations.
- Project Delivery: Work on cross-cloud project delivery and lead the data and analytics stream, spearheading Data Cloud design and delivery; collaborate with cross-functional teams from developer to executive level.
- Data Architecture: Design and guide customers' enterprise data architecture aligned to business goals; highlight the importance of data ethics and privacy by ensuring that customer solutions adhere to relevant regulations and best practices in data security and privacy.
- Data Cloud Enablement: Lead Data Cloud architecture enablement for key domains and cross-cloud teams.
- Analytics Support: Collaborate with analytics and AI teams to ensure data readiness for advanced analytics, reporting, and AI/ML initiatives.
- Stakeholder Engagement: Work cross-functionally across multiple Salesforce teams and projects to deliver aligned and trusted data solutions; facilitate and influence executive customer stakeholders while aligning technology strategy to business value and ROI; build strong relationships with both internal and external teams, contributing to broader goals and growth.
- Documentation: Create and maintain high-quality architecture blueprints, design documents, standards, and technical guidelines.

Technical Skills:
- 15+ years in data architecture or consulting, with solution design and project delivery experience
- Deep knowledge of MDM, data distribution and data modelling concepts
- Expertise in data modelling with a strong understanding of metadata and lineage
- Experience executing data strategies, landscape architecture assessments and proofs of concept
- Excellent communication, stakeholder management and presentation skills
- Strong technical writing and documentation ability
- Basic understanding of Hadoop and Spark fundamentals is an advantage
- Understanding of data platforms (for example Snowflake, Databricks, AWS, GCP, MS Azure)
- Experience with tools such as Salesforce Data Cloud or similar enterprise data platforms; deep hands-on Data Cloud experience is a strong plus
- Working knowledge of enterprise data warehouse, data lake and data hub concepts
- A strong understanding of Salesforce products and of functional domains such as Technology, Finance, Telco, Manufacturing and Retail is a positive

Expected Qualifications:
- Salesforce Certified Data Cloud Consultant - highly preferred
- Salesforce Data Architect - preferred
- Salesforce Application Architect - preferred
- AWS Spark/DL, Azure Databricks, Fabric, Google Cloud, Snowflake, or similar - preferred

Posted 3 weeks ago

Apply

15.0 - 20.0 years

30 - 35 Lacs

Bengaluru

Work from Office

To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts.

Job Category: Customer Success

About Salesforce: The SF Data Cloud Architect plays a critical role within Salesforce's Professional Services team, assisting in pre-sales and leading the design and implementation of enterprise-grade data management solutions. This position is responsible for architecting scalable solutions across enterprise landscapes using Data Cloud, ensuring data is ready for enterprise AI, applying data governance guardrails, and supporting enterprise analytics and automation. The role covers the ANZ, ASEAN and India markets. The ideal candidate brings deep expertise in data architecture, the project lifecycle and the Salesforce ecosystem, combined with strong soft skills, stakeholder engagement and technical writing ability. You will collaborate with cross-functional teams to shape the future of customers' data ecosystems and enable data excellence at scale.

Key Responsibilities:
- Salesforce Data Cloud Trusted Advisor: Be a trusted SME for Data Cloud, supporting and leading project delivery and customer engagements during the pre-sales cycle, including how Data Cloud relates to the success of AI.
- Architecture Support: Provide data and system architecture guidance to Salesforce account teams and customers by reviewing proposed architectures; peer-review project effort estimates, scope and delivery considerations.
- Project Delivery: Work on cross-cloud project delivery and lead the data and analytics stream, spearheading Data Cloud design and delivery; collaborate with cross-functional teams from developer to executive level.
- Data Architecture: Design and guide customers' enterprise data architecture aligned to business goals; highlight the importance of data ethics and privacy by ensuring that customer solutions adhere to relevant regulations and best practices in data security and privacy.
- Data Cloud Enablement: Lead Data Cloud architecture enablement for key domains and cross-cloud teams.
- Analytics Support: Collaborate with analytics and AI teams to ensure data readiness for advanced analytics, reporting, and AI/ML initiatives.
- Stakeholder Engagement: Work cross-functionally across multiple Salesforce teams and projects to deliver aligned and trusted data solutions; facilitate and influence executive customer stakeholders while aligning technology strategy to business value and ROI; build strong relationships with both internal and external teams, contributing to broader goals and growth.
- Documentation: Create and maintain high-quality architecture blueprints, design documents, standards, and technical guidelines.

Technical Skills:
- 15+ years in data architecture or consulting, with solution design and project delivery experience
- Deep knowledge of MDM, data distribution and data modelling concepts
- Expertise in data modelling with a strong understanding of metadata and lineage
- Experience executing data strategies, landscape architecture assessments and proofs of concept
- Excellent communication, stakeholder management and presentation skills
- Strong technical writing and documentation ability
- Basic understanding of Hadoop and Spark fundamentals is an advantage
- Understanding of data platforms (for example Snowflake, Databricks, AWS, GCP, MS Azure)
- Experience with tools such as Salesforce Data Cloud or similar enterprise data platforms; deep hands-on Data Cloud experience is a strong plus
- Working knowledge of enterprise data warehouse, data lake and data hub concepts
- A strong understanding of Salesforce products and of functional domains such as Technology, Finance, Telco, Manufacturing and Retail is a positive

Expected Qualifications:
- Salesforce Certified Data Cloud Consultant - highly preferred
- Salesforce Data Architect - preferred
- Salesforce Application Architect - preferred
- AWS Spark/DL, Azure Databricks, Fabric, Google Cloud, Snowflake, or similar - preferred

Accommodations: If you require assistance due to a disability when applying for open positions, please submit a request via the Accommodations Request Form.

Posted 3 weeks ago

Apply

7.0 - 10.0 years

7 - 10 Lacs

Hyderabad

Work from Office

Role: Oracle EPM
Experience: 7-10 Years
Location: Hyderabad

Required Skills: Oracle EPM and Hyperion Planning and Essbase implementation; developing financial reports and data forms; advanced knowledge of FDMEE and ODI for automating data and metadata integration; banking or financial services clients preferred.

Key Responsibilities:
- Lead or support end-to-end implementation of Oracle EPM and Hyperion Planning and Essbase solutions (on-premises).
- Design and develop financial reports and data forms based on business requirements.
- Develop and manage workflow processes within the Hyperion suite.
- Write and maintain business rules to support budgeting, forecasting, and reporting needs.
- Build and optimize data and metadata load automation using FDMEE and Oracle Data Integrator (ODI).
- Collaborate with finance and business stakeholders to translate functional requirements into technical solutions.
- Conduct system testing, UAT support, and user training sessions.
- Troubleshoot issues, monitor system performance, and provide ongoing support and enhancements.

Preferred Qualifications: Bachelor's or Master's degree in Computer Science, Finance, or a related field; Oracle certifications in Hyperion or related areas.

Posted 3 weeks ago

Apply

3.0 - 5.0 years

25 - 30 Lacs

Bengaluru

Work from Office

Title: Senior Developer
Date: 8 Jul 2025
Location: Bangalore, KA, IN

Job Description: We are a technology-led healthcare solutions provider, driven by our purpose to enable healthcare organizations to be future-ready. We offer accelerated, global growth opportunities for talent that's bold, industrious, and nimble. With Indegene, you gain a unique career experience that celebrates entrepreneurship and is guided by passion, innovation, collaboration, and empathy. To explore exciting opportunities at the convergence of healthcare and technology, check out www.careers.indegene.com.

Looking to jump-start your career? We understand how important the first few years of your career are, as they create the foundation of your entire professional journey. At Indegene, we promise you a differentiated career experience. You will not only work at the exciting intersection of healthcare and technology but also be mentored by some of the most brilliant minds in the industry. We are offering a global fast-track career where you can grow along with Indegene's high-speed growth. We are purpose-driven: we enable healthcare organizations to be future-ready, and our customer obsession is our driving force. We ensure that our customers achieve what they truly want. We are bold in our actions, nimble in our decision-making, and industrious in the way we work. If this excites you, then apply below.

Role: Sr. Backend Engineer - GenAI Services

Description: We are looking for passionate backend engineers to build scalable and secure APIs that power GenAI systems. You will collaborate with architecture, DevOps, and AI teams to support RAG and LLM-based workflows.

Responsibilities:
- Develop APIs for serving LLM results and handling embedding-based search (RAG)
- Implement queueing, async jobs, caching layers, and modular services
- Write and test secure Python backend code using FastAPI
- Integrate vector DBs and retrieval systems into backend pipelines
- Participate in PR reviews and contribute to platform reliability

Must Have:
- Experience: 3-5 years
- Tech stack: Python, FastAPI, PostgreSQL, Git; Redis, Docker, RabbitMQ
- RAG systems: connect to vector DBs for RAG, implement metadata-based schema RAG integrations
- Cloud and deployment: AWS (RDS, ECS, Lambda), GCP, or Azure
- AI tools and productivity stack: GitHub Copilot, PR reviewers
- Security and compliance: token and session handling, access management, logging, rate limiting

Good to Have: Detail-driven, reliable, quality-focused, fast learner, team collaborator.

EQUAL OPPORTUNITY
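The core of this role is serving embedding-based retrieval behind an API. A minimal sketch of that shape, assuming FastAPI with a toy in-memory store and a placeholder hash-based embedding; a real service would use a proper embedding model and a dedicated vector DB like those the posting names:

```python
# Minimal RAG-style retrieval endpoint sketch. The embedding function and
# in-memory document store are illustrative assumptions, not this team's stack.
from typing import List, Tuple

import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Toy in-memory "vector DB": (text, embedding) pairs.
_DOCS: List[Tuple[str, np.ndarray]] = []


def embed(text: str) -> np.ndarray:
    """Placeholder embedding: hash characters into a fixed-size unit vector."""
    vec = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        vec[(i + ord(ch)) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec


class Query(BaseModel):
    question: str
    top_k: int = 3


@app.post("/ingest")
def ingest(texts: List[str]):
    """Embed and store documents for later retrieval."""
    for t in texts:
        _DOCS.append((t, embed(t)))
    return {"count": len(_DOCS)}


@app.post("/search")
def search(q: Query):
    """Return the top_k documents most similar to the query embedding."""
    qv = embed(q.question)
    scored = sorted(_DOCS, key=lambda d: float(qv @ d[1]), reverse=True)
    return {"matches": [text for text, _ in scored[: q.top_k]]}
```

Run with `uvicorn app:app`; the retrieved matches would then be passed to an LLM as context in a full RAG workflow.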

Posted 3 weeks ago

Apply

15.0 - 20.0 years

50 - 55 Lacs

Mumbai

Work from Office

Role: SAP Data Architect
Work Mode: Remote
Contract: 6 months

Position Summary: We are seeking an experienced SAP Data Architect with a strong background in enterprise data management, SAP S/4HANA architecture, and data integration. This role demands deep expertise in designing and governing data structures, ensuring data consistency, and leading data transformation initiatives across large-scale SAP implementations.

Key Responsibilities:
- Lead the data architecture design across multiple SAP modules and legacy systems.
- Define data governance strategies, master data management (MDM), and metadata standards.
- Architect data migration strategies for SAP S/4HANA projects, including ETL, data validation, and reconciliation.
- Develop data models (conceptual, logical, and physical) aligned with business and technical requirements.
- Collaborate with functional and technical teams to ensure integration across SAP and non-SAP platforms.
- Establish data quality frameworks and monitoring practices.
- Conduct impact assessments and ensure scalability of the data architecture.
- Support reporting and analytics requirements through structured data delivery frameworks (e.g., SAP BW, SAP HANA).

Required Qualifications:
- 15+ years of experience in enterprise data architecture, with 8+ years in SAP landscapes.
- Proven experience with SAP S/4HANA data models, SAP Datasphere, SAP HANA Cloud and SAC, and integrating these with an AWS Data Lake (S3).
- Strong knowledge of data migration tools (SAP Data Services, LTMC/LSMW, BODS, etc.).
- Expertise in data governance, master data strategy, and data lifecycle management.
- Experience with cloud data platforms (Azure, AWS, or GCP) is a plus.
- Strong analytical and communication skills to work across business and IT stakeholders.

Preferred Certifications:
- SAP Certified Technology Associate - SAP S/4HANA / Datasphere
- TOGAF or other enterprise architecture certifications
- ITIL Foundation (for process alignment)

Posted 3 weeks ago

Apply

4.0 - 9.0 years

10 - 14 Lacs

Pune

Work from Office

Job Title: Strategic Data Archive Onboarding Engineer, AS
Location: Pune, India

Role Description: Strategic Data Archive is an internal service which enables applications to implement records management for regulatory requirements, application decommissioning, and application optimization. You will work closely with other teams, providing hands-on onboarding support by helping them define record content and metadata, configuring archiving, supporting testing, and creating defensible documentation that archiving was complete. You will need to both support and manage the expectations of demanding internal clients.

What we'll offer you:
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities:
- Provide responsive customer service, helping internal clients understand and efficiently manage their records management risks
- Explain our archiving services (both the business value and the technical implementation) and respond promptly to inquiries
- Support the documentation and approval of requirements, including record content and metadata
- Identify and facilitate implementing an efficient solution to meet the requirements
- Manage expectations and provide regular updates, frequently to senior stakeholders
- Configure archiving in test environments; you will not be coding new functionality but will be making configuration changes maintained in a code repository and deployed with standard tools
- Support testing, ensuring clients have appropriately managed implementation risks
- Help resolve issues, including data issues, environment challenges, and code bugs
- Promote configurations from test environments to production
- Work with Production Support to ensure archiving is completed and evidenced
- Contribute towards a culture of learning and continuous improvement
- Partner with teams in multiple locations

Your skills and experience:
- Delivers against tight deadlines in a fast-paced environment
- Manages others' expectations and meets commitments
- High degree of accuracy and attention to detail
- Ability to communicate both business concepts and technical details concisely (written and verbal) and to influence partners, including senior managers
- High analytical capability and able to quickly grasp new contexts; we support multiple areas of the Bank
- Expresses opinions while supporting group decisions
- Ensures deliverables are clearly documented and holds self and others accountable for meeting them
- Ability to identify risks at an early stage and implement mitigating strategies
- Flexibility and willingness to work autonomously and collaboratively
- Ability to work in virtual teams, agile environments, and matrixed organizations
- Treats everyone with respect and embraces diversity
- Bachelor's degree from an accredited college or university desirable
- Minimum 4 years of experience implementing IT solutions in a global financial institution
- Comfortable with technology (e.g., SQL, FTP, XML, JSON) and a desire and ability to learn new skills as required (e.g., Fabric, Kubernetes, Kafka, Avro, Ansible)
- Must be an expert in SQL and have Python programming experience
- Financial markets and Google Cloud Platform knowledge a plus; curiosity a requirement

About us and our teams: Please visit our company website for further information: https://www.db.com/company/company.htm

We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.

Posted 3 weeks ago

Apply

15.0 - 20.0 years

45 - 50 Lacs

Kolkata

Remote

Contract: 6 months

Position Summary: We are seeking an experienced SAP Data Architect with a strong background in enterprise data management, SAP S/4HANA architecture, and data integration. This role demands deep expertise in designing and governing data structures, ensuring data consistency, and leading data transformation initiatives across large-scale SAP implementations.

Key Responsibilities:
- Lead the data architecture design across multiple SAP modules and legacy systems.
- Define data governance strategies, master data management (MDM), and metadata standards.
- Architect data migration strategies for SAP S/4HANA projects, including ETL, data validation, and reconciliation.
- Develop data models (conceptual, logical, and physical) aligned with business and technical requirements.
- Collaborate with functional and technical teams to ensure integration across SAP and non-SAP platforms.
- Establish data quality frameworks and monitoring practices.
- Conduct impact assessments and ensure scalability of the data architecture.
- Support reporting and analytics requirements through structured data delivery frameworks (e.g., SAP BW, SAP HANA).

Required Qualifications:
- 15+ years of experience in enterprise data architecture, with 8+ years in SAP landscapes.
- Proven experience with SAP S/4HANA data models, SAP Datasphere, SAP HANA Cloud and SAC, and integrating these with an AWS Data Lake (S3).
- Strong knowledge of data migration tools (SAP Data Services, LTMC/LSMW, BODS, etc.).
- Expertise in data governance, master data strategy, and data lifecycle management.
- Experience with cloud data platforms (Azure, AWS, or GCP) is a plus.
- Strong analytical and communication skills to work across business and IT stakeholders.

Preferred Certifications:
- SAP Certified Technology Associate - SAP S/4HANA / Datasphere
- TOGAF or other enterprise architecture certifications
- ITIL Foundation (for process alignment)

Posted 3 weeks ago

Apply

10.0 - 15.0 years

18 - 22 Lacs

Chennai

Remote

Contract: 6 months

Position Summary: We are seeking an experienced SAP Data Architect with a strong background in enterprise data management, SAP S/4HANA architecture, and data integration. This role demands deep expertise in designing and governing data structures, ensuring data consistency, and leading data transformation initiatives across large-scale SAP implementations.

Key Responsibilities:
- Lead the data architecture design across multiple SAP modules and legacy systems.
- Define data governance strategies, master data management (MDM), and metadata standards.
- Architect data migration strategies for SAP S/4HANA projects, including ETL, data validation, and reconciliation.
- Develop data models (conceptual, logical, and physical) aligned with business and technical requirements.
- Collaborate with functional and technical teams to ensure integration across SAP and non-SAP platforms.
- Establish data quality frameworks and monitoring practices.
- Conduct impact assessments and ensure scalability of the data architecture.
- Support reporting and analytics requirements through structured data delivery frameworks (e.g., SAP BW, SAP HANA).

Required Qualifications:
- 15+ years of experience in enterprise data architecture, with 8+ years in SAP landscapes.
- Proven experience with SAP S/4HANA data models, SAP Datasphere, SAP HANA Cloud and SAC, and integrating these with an AWS Data Lake (S3).
- Strong knowledge of data migration tools (SAP Data Services, LTMC/LSMW, BODS, etc.).
- Expertise in data governance, master data strategy, and data lifecycle management.
- Experience with cloud data platforms (Azure, AWS, or GCP) is a plus.
- Strong analytical and communication skills to work across business and IT stakeholders.

Preferred Certifications:
- SAP Certified Technology Associate - SAP S/4HANA / Datasphere
- TOGAF or other enterprise architecture certifications
- ITIL Foundation (for process alignment)

Posted 3 weeks ago

Apply

15.0 - 20.0 years

18 - 22 Lacs

Bengaluru

Remote

Position Summary: We are seeking an experienced SAP Data Architect with a strong background in enterprise data management, SAP S/4HANA architecture, and data integration. This role demands deep expertise in designing and governing data structures, ensuring data consistency, and leading data transformation initiatives across large-scale SAP implementations.

Key Responsibilities:
- Lead the data architecture design across multiple SAP modules and legacy systems.
- Define data governance strategies, master data management (MDM), and metadata standards.
- Architect data migration strategies for SAP S/4HANA projects, including ETL, data validation, and reconciliation.
- Develop data models (conceptual, logical, and physical) aligned with business and technical requirements.
- Collaborate with functional and technical teams to ensure integration across SAP and non-SAP platforms.
- Establish data quality frameworks and monitoring practices.
- Conduct impact assessments and ensure scalability of the data architecture.
- Support reporting and analytics requirements through structured data delivery frameworks (e.g., SAP BW, SAP HANA).

Required Qualifications:
- 15+ years of experience in enterprise data architecture, with 8+ years in SAP landscapes.
- Proven experience with SAP S/4HANA data models, SAP Datasphere, SAP HANA Cloud and SAC, and integrating these with an AWS Data Lake (S3).
- Strong knowledge of data migration tools (SAP Data Services, LTMC/LSMW, BODS, etc.).
- Expertise in data governance, master data strategy, and data lifecycle management.
- Experience with cloud data platforms (Azure, AWS, or GCP) is a plus.
- Strong analytical and communication skills to work across business and IT stakeholders.

Preferred Certifications:
- SAP Certified Technology Associate - SAP S/4HANA / Datasphere
- TOGAF or other enterprise architecture certifications
- ITIL Foundation (for process alignment)

Posted 3 weeks ago

Apply

8.0 - 13.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Career Category: Information Systems

ABOUT AMGEN: Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

ABOUT THE ROLE

Role Description: We are seeking an experienced MDM Engineer with 8-12 years of experience to lead development and operations of our Master Data Management (MDM) platforms, with hands-on data engineering experience. The role involves owning the backend data engineering solution within the MDM team. This is a technical role that requires hands-on work. To succeed, the candidate must have strong data engineering experience with technologies such as SQL, Python, PySpark, Databricks, AWS, and API integrations.

Roles & Responsibilities:
- Develop distributed data pipelines using PySpark on Databricks for ingesting, transforming, and publishing master data
- Write optimized SQL for large-scale data processing, including complex joins, window functions, and CTEs for MDM logic
- Implement match/merge algorithms and survivorship rules using Informatica MDM or Reltio APIs (see the survivorship sketch after this listing)
- Build and maintain Delta Lake tables with schema evolution and versioning for master data domains
- Use AWS services like S3, Glue, Lambda, and Step Functions for orchestrating MDM workflows
- Automate data quality checks using IDQ or custom PySpark validators with rule-based profiling
- Integrate external enrichment sources (e.g., D&B, LexisNexis) via REST APIs and batch pipelines
- Design and deploy CI/CD pipelines using GitHub Actions or Jenkins for Databricks notebooks and jobs
- Monitor pipeline health using the Databricks Jobs API, CloudWatch, and custom logging frameworks
- Implement fine-grained access control using Unity Catalog and attribute-based policies for MDM datasets
- Use MLflow for tracking model-based entity resolution experiments if ML-based matching is applied
- Collaborate with data stewards to expose curated MDM views via REST endpoints or Delta Sharing

Basic Qualifications and Experience: 8 to 13 years of experience in Business, Engineering, IT or a related field.

Must-Have Skills:
- Advanced proficiency in PySpark for distributed data processing and transformation
- Strong SQL skills for complex data modeling, cleansing, and aggregation logic
- Hands-on experience with Databricks, including Delta Lake, notebooks, and job orchestration
- Deep understanding of MDM concepts including match/merge, survivorship, and golden record creation
- Experience with MDM platforms like Informatica MDM or Reltio, including REST API integration
- Proficiency in AWS services such as S3, Glue, Lambda, Step Functions, and IAM
- Familiarity with data quality frameworks and tools like Informatica IDQ or custom rule engines
- Experience building CI/CD pipelines for data workflows using GitHub Actions, Jenkins, or similar
- Knowledge of schema evolution, versioning, and metadata management in data lakes
- Ability to implement lineage and observability using Unity Catalog or third-party tools
- Comfort with Unix shell scripting or Python for orchestration and automation
- Hands-on experience with RESTful APIs for ingesting external data sources and enrichment feeds

Good-to-Have Skills:
- Experience with Tableau or Power BI for reporting MDM insights
- Exposure to Agile practices and tools (JIRA, Confluence)
- Prior experience in Pharma/Life Sciences
- Understanding of compliance and regulatory considerations in master data

Professional Certifications:
- Any MDM certification (e.g., Informatica, Reltio)
- Any data analysis certification (SQL, Python, PySpark, Databricks)
- Any cloud certification (AWS or Azure)

Soft Skills:
- Strong analytical abilities to assess and improve master data processes and solutions
- Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders
- Effective problem-solving skills to address data-related issues and implement scalable solutions
- Ability to work effectively with global, virtual teams

EQUAL OPPORTUNITY STATEMENT: We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

GCF Level 05A
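To make the match/merge and survivorship responsibility concrete, here is a minimal PySpark sketch of a survivorship rule that picks one golden record per match group. The column names and precedence rule (trusted source first, then recency) are illustrative assumptions, not Amgen's actual MDM configuration:

```python
# Survivorship sketch: one "golden" record per matched group, chosen by a
# source-trust rank and then by the most recent update (window function).
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("mdm-survivorship").getOrCreate()

# Toy input: records already grouped by an upstream matching step.
# match_group_id, source, source_rank, updated_at, name are assumed columns.
records = spark.createDataFrame(
    [
        ("grp-1", "CRM", 1, "2024-05-01", "Acme Corp"),
        ("grp-1", "ERP", 2, "2024-06-15", "ACME Corporation"),
        ("grp-2", "CRM", 1, "2024-01-10", "Globex"),
    ],
    ["match_group_id", "source", "source_rank", "updated_at", "name"],
)

# Rule: prefer the most trusted source, break ties with the latest update.
# ISO date strings sort correctly lexicographically.
w = Window.partitionBy("match_group_id").orderBy(
    F.col("source_rank").asc(), F.col("updated_at").desc()
)

golden = (
    records.withColumn("rn", F.row_number().over(w))
    .filter(F.col("rn") == 1)
    .drop("rn")
)

golden.show(truncate=False)
```

In a production pipeline this result would be merged into a Delta Lake golden-record table rather than shown; the window-function pattern is the same one the posting's SQL bullet describes.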

Posted 3 weeks ago

Apply

7.0 - 9.0 years

9 - 11 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

Job Description: Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 16,700 stores in 31 countries serving more than 9 million customers each day. At Circle K, we are building a best-in-class global data engineering practice to support intelligent business decision-making and drive value across our retail ecosystem. As we scale our engineering capabilities, we're seeking a Lead Data Engineer to serve as both a technical leader and people coach for our India-based Data Enablement pod. This role will oversee the design, delivery, and maintenance of critical cross-functional datasets and reusable data assets while also managing a group of talented engineers in India. The position plays a dual role: contributing hands-on to engineering execution while mentoring and developing engineers in their technical careers.

About the role: The ideal candidate combines deep technical acumen, stakeholder awareness, and a people-first leadership mindset. You'll collaborate with global tech leads, managers, platform teams, and business analysts to build trusted, performant data pipelines that serve use cases beyond traditional data domains.

Responsibilities:
- Design, develop, and maintain scalable pipelines across ADF, Databricks, Snowflake, and related platforms
- Lead the technical execution of non-domain-specific initiatives (e.g., reusable dimensions, TLOG standardization, enablement pipelines)
- Architect data models and reusable layers consumed by multiple downstream pods
- Guide platform-wide patterns like parameterization, CI/CD pipelines, pipeline recovery, and auditability frameworks
- Mentor and coach the team
- Partner with product and platform leaders to ensure engineering consistency and delivery excellence
- Act as an L3 escalation point for operational data issues impacting foundational pipelines
- Own engineering best practices, sprint planning, and quality across the Enablement pod
- Contribute to platform discussions and architectural decisions across regions

Job Requirements:

Education: Bachelor's or master's degree in computer science, engineering, or a related field.

Relevant Experience: 7-9 years of data engineering experience with strong hands-on delivery using ADF, SQL, Python, Databricks, and Spark; experience designing data pipelines, warehouse models, and processing frameworks using Snowflake or Azure Synapse.

Knowledge and Preferred Skills:
- Proficient with CI/CD tools (Azure DevOps, GitHub) and observability practices
- Solid grasp of data governance, metadata tagging, and role-based access control
- Proven ability to mentor and grow engineers in a matrixed or global environment
- Strong verbal and written communication skills, with the ability to operate cross-functionally
- Certifications in Azure, Databricks, or Snowflake are a plus
- Strong knowledge of data engineering concepts (data pipeline creation, data warehousing, data marts/cubes, data reconciliation and audit, data management)
- Working knowledge of DevOps processes (CI/CD), the Git/Jenkins version control tools, Master Data Management (MDM) and data quality tools
- Strong experience in ETL/ELT development, QA, and operation/support processes (RCA of production issues, code/data fix strategy, monitoring and maintenance)
- Hands-on experience with databases (Azure SQL DB, Snowflake, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting
- ADF, Databricks and Azure certification is a plus

Technologies we use: Databricks, Azure SQL DW/Synapse, Snowflake, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (PowerShell, Bash), Git, Terraform, Power BI

Posted 3 weeks ago

Apply

1.0 - 2.0 years

3 - 4 Lacs

Gurugram

Work from Office

About the Role: We are seeking a highly skilled Marketing Automation & Technical SEO Specialist who can implement, manage, and automate SEO processes directly into our internal dashboard. This role requires a unique blend of SEO expertise, automation know-how, and API/integration skills.

Key Responsibilities:
- Design and implement automated SEO workflows (e.g., keyword rank tracking, site audits, broken link detection, schema validation, content gap analysis).
- Set up SEO reporting inside our CRM/dashboard.
- Integrate third-party SEO tools (e.g., Ahrefs, SEMrush, Google Search Console, Screaming Frog) with custom dashboards via APIs.
- Work with developers to implement technical SEO recommendations.
- Automate on-page SEO audits, metadata checks, sitemap monitoring, etc.
- Ensure page speed optimization, canonicalization, and other technical SEO aspects.

Requirements:
- Proven experience in technical SEO and marketing automation.
- Hands-on experience with on-page SEO, off-page SEO, marketing automation flows, and CRM integration.
- Familiarity with APIs (especially GSC, GA4, Ahrefs, and their APIs).
- Strong understanding of schema markup, technical audits, keyword research and ranking, etc.
- Bonus: experience working with CRM platforms.

Nice-to-Have:
- Built or contributed to a custom SEO dashboard.
- Familiarity with AI tools for content optimization or SEO.
- Experience in SaaS or digital product companies.

Skills: Marketing Automation, Technical SEO, On-Page/Off-Page SEO, SEO Reporting, CRM Integration, Ahrefs, SEMrush, Google Search Console, Screaming Frog, Metadata Checks, Sitemap Monitoring, Keyword Rank Tracking, Site Audits, Broken Link Detection, Schema Validation, Content Gap Analysis, etc.

Experience: 1-2 Years
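As a sketch of the "metadata checks" and "sitemap monitoring" automation the role describes, the following Python script walks a standard XML sitemap and flags missing or over-length titles and meta descriptions. The sitemap URL and length thresholds are illustrative assumptions, not values from the posting:

```python
# On-page metadata audit sketch: fetch every page in a sitemap and report
# title/description problems. Thresholds follow common SEO guidelines only.
import requests
from bs4 import BeautifulSoup
from xml.etree import ElementTree

SITEMAP_URL = "https://example.com/sitemap.xml"  # assumption: your site
TITLE_MAX, DESC_MAX = 60, 160  # common guideline lengths, not hard rules


def page_urls(sitemap_url: str):
    """Yield <loc> entries from a standard XML sitemap."""
    xml = requests.get(sitemap_url, timeout=10).text
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    for loc in ElementTree.fromstring(xml).findall(".//sm:loc", ns):
        yield loc.text.strip()


def audit(url: str) -> dict:
    """Return title/description findings for one page."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    title = soup.title.string.strip() if soup.title and soup.title.string else ""
    meta = soup.find("meta", attrs={"name": "description"})
    desc = (meta.get("content") or "").strip() if meta else ""
    return {
        "url": url,
        "missing_title": not title,
        "title_too_long": len(title) > TITLE_MAX,
        "missing_description": not desc,
        "description_too_long": len(desc) > DESC_MAX,
    }


if __name__ == "__main__":
    for u in page_urls(SITEMAP_URL):
        print(audit(u))
```

The same loop can push its findings to a dashboard or CRM via that system's API instead of printing them.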

Posted 3 weeks ago

Apply

7.0 - 12.0 years

9 - 14 Lacs

Bengaluru

Work from Office

Location: Bangalore/Hyderabad/Pune
Experience level: 7+ Years

About the Role: We are seeking a highly skilled Snowflake Developer to join our team in Bangalore. The ideal candidate will have extensive experience in designing, implementing, and managing Snowflake-based data solutions. The role involves developing data architectures and ensuring the effective use of Snowflake to drive business insights and innovation.

Key Responsibilities:
- Design and implement scalable, efficient, and secure Snowflake solutions to meet business requirements.
- Develop data architecture frameworks, standards, and principles, including modeling, metadata, security, and reference data.
- Implement Snowflake-based data warehouses, data lakes, and data integration solutions.
- Manage data ingestion, transformation, and loading processes to ensure data quality and performance.
- Collaborate with business stakeholders and IT teams to develop data strategies and ensure alignment with business goals.
- Drive continuous improvement by leveraging the latest Snowflake features and industry trends.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- 8+ years of experience in data architecture, data engineering, or a related field.
- Extensive experience with Snowflake, including designing and implementing Snowflake-based solutions.
- Must have exposure to working with Airflow.
- Proven track record of contributing to data projects and working in complex environments.
- Familiarity with cloud platforms (e.g., AWS, GCP) and their data services.
- Snowflake certification (e.g., SnowPro Core, SnowPro Advanced) is a plus.
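For a flavor of the programmatic ingestion work this role covers, here is a minimal sketch using the Snowflake Python connector to stage a local file and bulk-load it with COPY INTO. The connection parameters, file path, and table name are placeholders, not details from the posting:

```python
# Stage-and-load sketch: PUT a file to the table stage, then COPY INTO.
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="my_account",   # assumption: replace with your account locator
    user="etl_user",
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # PUT uploads the file to the table's built-in stage (@%TABLE).
    cur.execute("PUT file:///tmp/events.csv @%EVENTS")
    # COPY INTO is Snowflake's standard bulk-ingestion path; it returns one
    # status row per file (rows loaded, errors seen).
    cur.execute(
        "COPY INTO EVENTS FROM @%EVENTS "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    for row in cur.fetchall():
        print(row)
finally:
    conn.close()
```

In practice a job like this would be scheduled from Airflow, which the qualifications list explicitly.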

Posted 3 weeks ago

Apply

8.0 - 10.0 years

10 - 12 Lacs

Bengaluru

Work from Office

Location: Bangalore/Hyderabad/Pune
Experience level: 8+ Years

About the Role: We are looking for a technical and hands-on Lead Data Engineer to help drive the modernization of our data transformation workflows. We currently rely on legacy SQL scripts orchestrated via Airflow, and we are transitioning to a modular, scalable, CI/CD-driven DBT-based data platform. The ideal candidate has deep experience with DBT and modern data stack design, and has previously led similar migrations, improving code quality, lineage visibility, performance, and engineering best practices.

Key Responsibilities:
- Lead the migration of legacy SQL-based ETL logic to DBT-based transformations
- Design and implement a scalable, modular DBT architecture (models, macros, packages)
- Audit and refactor legacy SQL for clarity, efficiency, and modularity
- Improve CI/CD pipelines for DBT: automated testing, deployment, and code quality enforcement
- Collaborate with data analysts, platform engineers, and business stakeholders to understand current gaps and define future data pipelines
- Own the Airflow orchestration redesign where needed (e.g., DBT Cloud/API hooks or airflow-dbt integration; see the sketch after this listing)
- Define and enforce coding standards, review processes, and documentation practices
- Coach junior data engineers on DBT and SQL best practices
- Provide lineage and impact analysis improvements using DBT's built-in tools and metadata

Must-Have Qualifications:
- 8+ years of experience in data engineering
- Proven success in migrating legacy SQL to DBT, with visible results
- Deep understanding of DBT best practices, including model layering, Jinja templating, testing, and packages
- Proficiency in SQL performance tuning, modular SQL design, and query optimization
- Experience with Airflow (Composer, MWAA), including DAG refactoring and task orchestration
- Hands-on experience with modern data stacks (e.g., Snowflake, BigQuery)
- Familiarity with data testing and CI/CD for analytics workflows
- Strong communication and leadership skills; comfortable working cross-functionally

Nice-to-Have:
- Experience with DBT Cloud or DBT Core integrations with Airflow
- Familiarity with data governance and lineage tools (e.g., dbt docs, Alation)
- Exposure to Python (for custom Airflow operators/macros or utilities)
- Previous experience mentoring teams through modern data stack transitions
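One common shape for the airflow-dbt integration mentioned above is an Airflow DAG that invokes the dbt CLI per model layer, with tests gating the run. A hedged sketch, assuming Airflow 2.4+ and a dbt project at an illustrative path:

```python
# Airflow DAG sketch: run dbt staging models, then marts, then dbt tests.
# Project path, selectors, and schedule are assumptions for illustration.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

DBT_DIR = "/opt/dbt/analytics"  # assumption: dbt project location

with DAG(
    dag_id="dbt_transformations",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Staging models first, then marts: mirrors DBT's model layering.
    run_staging = BashOperator(
        task_id="dbt_run_staging",
        bash_command=f"cd {DBT_DIR} && dbt run --select staging",
    )
    run_marts = BashOperator(
        task_id="dbt_run_marts",
        bash_command=f"cd {DBT_DIR} && dbt run --select marts",
    )
    # dbt test enforces the automated testing the migration aims for.
    test_all = BashOperator(
        task_id="dbt_test",
        bash_command=f"cd {DBT_DIR} && dbt test",
    )

    run_staging >> run_marts >> test_all
```

Finer-grained alternatives (one Airflow task per dbt model, or DBT Cloud API hooks) trade simplicity for better retries and observability; the BashOperator pattern is just the smallest working version.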

Posted 3 weeks ago

Apply

3.0 - 5.0 years

5 - 6 Lacs

Bengaluru

Work from Office

Req ID: 331269

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking an Informatica Admin to join our team in Bangalore, Karnataka (IN-KA), India.

Informatica Cloud Data Governance & Catalog (CDGC):
- Glossary creation, metadata management, data classification
- Data lineage, policy definition, and domain configuration

Informatica Administration:
- User/role management, Secure Agent installation and maintenance
- Job monitoring, repository backups, system troubleshooting
- Environment configuration and version upgrades

Informatica Data Quality (IDQ):
- Data profiling, rule specification, transformations
- Scorecards, DQ metrics, accuracy and completeness checks
- Exception handling and remediation

Additionally, it would be beneficial if the candidate has knowledge and experience in:

Scripting:
- Shell scripting (Bash/Korn), Python scripting for automation
- Experience in building monitoring and housekeeping scripts (see the sketch below)

Cloud Knowledge:
- Familiarity with Azure, AWS, or GCP
- Working with cloud-hosted Informatica services

DevOps & CI/CD:
- Azure DevOps: creating and managing pipelines, repos, and releases
- Integration with Informatica for automated deployments
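As an illustration of the "monitoring and housekeeping scripts" the posting asks for, here is a minimal Python sketch that purges old log files and reports disk headroom. The log directory and retention window are assumptions, not Informatica defaults:

```python
# Housekeeping sketch: delete *.log files older than a retention window
# under an assumed agent log directory, then report free disk space.
import shutil
import time
from pathlib import Path

LOG_DIR = Path("/opt/informatica/agent/logs")  # assumption: your install path
RETENTION_DAYS = 14                            # assumption: your policy


def purge_old_logs(log_dir: Path, retention_days: int) -> int:
    """Delete *.log files older than retention_days; return count removed."""
    cutoff = time.time() - retention_days * 86400
    removed = 0
    for f in log_dir.glob("**/*.log"):
        if f.is_file() and f.stat().st_mtime < cutoff:
            f.unlink()
            removed += 1
    return removed


if __name__ == "__main__":
    removed = purge_old_logs(LOG_DIR, RETENTION_DAYS)
    total, used, free = shutil.disk_usage(LOG_DIR)
    print(f"removed {removed} old log files; "
          f"{free / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")
```

A script like this is typically run from cron or a scheduler, with the printed summary shipped to whatever monitoring channel the team uses.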

Posted 3 weeks ago

Apply

18.0 - 20.0 years

5 - 6 Lacs

Coimbatore

Work from Office

Roles and Responsibilities:
- Write and deploy Solidity smart contracts
- Connect the frontend (React) with smart contracts
- Optimize gas usage, ensure upgradeability
- Collaborate with the frontend/dev team
- Manage CI/CD for smart contract deployment
- Support audits and test coverage

Candidate Profile:
- Solidity and smart contract development (2-4+ yrs)
- Deep knowledge of ERC-721, ERC-1155, EIP-2981
- Experience with Web3.js or Ethers.js
- Hardhat / Foundry / OpenZeppelin stack
- Mainnet/testnet deployment experience
- Self-driven, team-friendly, and reliable

Bonus Points:
- Multi-chain experience (Polygon, Arbitrum, etc.)
- Familiarity with the OpenSea SDK, Moralis, and Alchemy
- NFT governance, fractional NFTs, or DAOs
- Backend experience with Node.js (for metadata/server logic)

Why join?
- Excellent working atmosphere
- Salary and bonus always paid on time
- You work for a company that has grown continuously for the past 18+ years
- Very supportive senior management
- And lots more

Posted 3 weeks ago

Apply

8.0 - 10.0 years

6 - 11 Lacs

Hyderabad

Work from Office

Role Purpose: The purpose of the role is to facilitate visual interpretation of data from multiple sources and use this information to develop data-driven solutions as per the client's requirements.

Do:

1. Develop valuable insights from multiple data sources as per client requirements
   a. Customer engagement and requirements gathering
      i. Understand customer needs and objectives, technology trends and requirements, to define how data will be presented as the final output
      ii. Develop wireframes, prototypes and use cases in order to demonstrate the final data output as required by the customer
      iii. Analyse, propose and implement the data technology and tools used for data visualization
      iv. Provide solutioning for RFPs received from clients and ensure the final data output meets business needs
      v. Validate the solution/prototype from a technology, cost-structure and customer-differentiation point of view
   b. Design and implementation of data visual aspects
      i. Architect and build data visualization capabilities to produce classical BI dashboards and solutions
      ii. Create the solutions using a variety of data mining/data analysis methods, data tools, data models and data semantics
      iii. Contribute to the design and implementation of the data platform architecture related to data visualization needs
      iv. Collaborate with other data architects to establish and run data governance processes
      v. Manage metadata and semantic-layer data on data domains and other enterprise data assets
      vi. Identify problem areas, perform root cause analysis of the overall data flow, and provide relevant solutions
   c. Enable the pre-sales team
      i. Support the pre-sales team in presenting the entire data design and its principles to the client
      ii. Negotiate, manage and coordinate with the client teams to ensure all requirements are met and the visual data output is created as proposed
      iii. Demonstrate thought leadership with strong technical capability in front of the client to win confidence and act as a trusted advisor

2. Capability building and team management
   a. Ensure completion of necessary trainings and certifications
   b. Develop and present Wipro's point of view on data visualization concepts and architecture by writing white papers, blogs, etc.
   c. Be the voice of Wipro's thought leadership by speaking in forums (internal and external)
   d. Mentor developers, designers and junior architects for their further career development and enhancement
   e. Anticipate new talent requirements as per market/industry trends and client requirements
   f. Hire adequate and right resources for the team
   g. Contribute to the data visualization practice by conducting selection interviews, etc.

Deliver:

No. | Performance Parameter | Measure
1 | Project Delivery | Quality of design/architecture; delivery as per cost, quality and timeline

Mandatory Skills: Business Analyst / Data Analyst (Maps). Experience: 8-10 Years.

Posted 3 weeks ago

Apply

8.0 - 11.0 years

7 - 11 Lacs

Hyderabad

Work from Office

Data Science Associate Advisor - HIH - Evernorth

ABOUT EVERNORTH: Evernorth exists to elevate health for all, because we believe health is the starting point for human potential and progress. As champions for affordable, predictable and simple health care, we solve the problems others don't, won't or can't. Our innovation hub in India will allow us to work with the right talent, expand our global footprint, improve our competitive stance, and better deliver on our promises to stakeholders. We are passionate about making healthcare better by delivering world-class solutions that make a real difference. We are always looking upward. And that starts with finding the right talent to help us get there.

Position Overview: In this role you will develop code to feed data to machine learning and statistical modeling platforms. The role is responsible for the extraction, transformation, and loading of data points from source to destination models. The data science advisor will collaborate with other data scientists and reporting teams across Evernorth to provide a streamlined way to productionize and locally test modeling. The role will also offer numerous opportunities to automate and innovate the ways in which modeling is done, and to consult with business partners on best practices.

Responsibilities:
- Locate, extract, manipulate, and organize data from operational source systems in support of analytic tool development (SQL, Microsoft SSIS, Python)
- Create and manage Postgres and SQL Server entities for use in data science modeling and reporting
- Proficient usage of SQL data sources and database management tools like Hadoop, Oracle, and SQL Server
- Partner with varying levels of operations and resource management leadership to understand challenges, goals, and pain points, designing analytic solutions to address them
- Build processes supporting data transformation, data structures, metadata, dependency and workload management
- Help develop and maintain code standards and repositories

Qualifications:
- 8-11 years building and optimizing big data pipelines, architectures and data sets
- Strong SQL expertise
- Experience with the use of Python for deployment
- Experience with Git (or equivalent)
- Experience with Python, Postgres and SSIS highly desired
- Solution design and troubleshooting skills
- Ability to extrapolate data into information to drive process improvements
- Ability to quickly learn how to use new software applications
- Comfortable working in environments with varying levels of ambiguity, complexity, uncertainty, and change

Location & Hours of Work: Full-time position, working 40 hours per week, with expected overlap with US hours as appropriate. Primarily based in the Innovation Hub in Hyderabad, India, in a hybrid working model (3 days in office and 2 days working from home).

Equal Opportunity Statement: Evernorth is an Equal Opportunity Employer actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform and advance both internal practices and external work with diverse client populations.

About Evernorth Health Services: Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
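A minimal sketch of the extract-transform-load flow this position describes, moving data from an operational SQL Server source into Postgres for downstream modeling. The connection strings, table names, and columns are illustrative assumptions, not Evernorth's schema:

```python
# ETL sketch: read from an operational SQL source, aggregate per-member
# features, and publish them to a Postgres table the models consume.
import pandas as pd
from sqlalchemy import create_engine

# Assumed connection URLs; replace with real credentials/hosts.
SOURCE_URL = ("mssql+pyodbc://user:pass@ops-server/claims"
              "?driver=ODBC+Driver+17+for+SQL+Server")
TARGET_URL = "postgresql+psycopg2://user:pass@analytics-db/modeling"


def run_etl() -> None:
    source = create_engine(SOURCE_URL)
    target = create_engine(TARGET_URL)

    # Extract: pull operational rows for the modeling window.
    df = pd.read_sql(
        "SELECT member_id, service_date, paid_amount FROM claims "
        "WHERE service_date >= '2025-01-01'",
        source,
    )

    # Transform: one row per member with simple aggregate features.
    features = (
        df.groupby("member_id")
        .agg(total_paid=("paid_amount", "sum"),
             claim_count=("paid_amount", "size"))
        .reset_index()
    )

    # Load: replace the feature table used by downstream modeling.
    features.to_sql("member_features", target, if_exists="replace", index=False)


if __name__ == "__main__":
    run_etl()
```

The same shape applies whether the orchestration is SSIS, cron, or a workflow scheduler; only the extract and load endpoints change.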

Posted 3 weeks ago

Apply

9.0 - 14.0 years

15 - 19 Lacs

Bengaluru

Work from Office

About the Role: We are looking for an Associate Architect with at least 9 years of experience to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets.

Key Responsibilities:
- Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting
- Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark and Flink (see the sketch after this listing)
- Lead improvements in data processing and serving layers, leveraging Databricks Spark, Trino, and Superset
- Maintain a good understanding of open table formats like Delta and Iceberg
- Scale data quality frameworks to ensure data accuracy and reliability
- Build data lineage tracking solutions for governance, access control, and compliance
- Collaborate with engineering, analytics, and business teams to identify opportunities and build or enhance self-serve data platforms
- Improve system stability, monitoring, and observability to ensure high availability of the platform
- Work with open-source communities and contribute to OSS projects aligned with Myntra's tech stack
- Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment

Qualifications:

Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.

Experience:
- 9+ years of experience in building large-scale data platforms
- Expertise in big data architectures using Databricks, Trino, and Debezium
- Strong experience with streaming platforms, including Confluent Kafka
- Experience in data ingestion, storage, processing, and serving in a cloud-based environment
- Hands-on experience implementing data quality checks using Great Expectations
- Deep understanding of data lineage, metadata management, and governance practices
- Strong knowledge of query optimization, cost efficiency, and scaling architectures
- Familiarity with OSS contributions and keeping up with industry trends in data engineering

Soft Skills:
- Strong analytical and problem-solving skills with a pragmatic approach to technical challenges
- Excellent communication and collaboration skills to work effectively with cross-functional teams
- Ability to lead large-scale projects in a fast-paced, dynamic environment
- Passion for continuous learning, open-source collaboration, and building best-in-class data products
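To illustrate the streaming-ingestion responsibility above (CDC events from Kafka landing in an open table format), here is a hedged PySpark Structured Streaming sketch. The topic, event schema, broker address, and storage paths are assumptions, and the Kafka and Delta connector packages must be on the Spark classpath:

```python
# Streaming ingestion sketch: consume JSON CDC events from a Kafka topic
# with Spark Structured Streaming and append them to a Delta table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("cdc-ingest").getOrCreate()

# Assumed event payload shape; a Debezium feed would carry more envelope fields.
event_schema = StructType([
    StructField("order_id", StringType()),
    StructField("status", StringType()),
    StructField("updated_at", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumption
    .option("subscribe", "orders.cdc")                 # assumption
    .load()
    # Kafka delivers bytes; decode the value and parse the JSON payload.
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/data/checkpoints/orders")  # assumption
    .outputMode("append")
    .start("/data/lake/orders")
)

query.awaitTermination()
```

The checkpoint location is what gives the pipeline exactly-once sink semantics across restarts, which matters at the 10B+ events/day scale the posting cites.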

Posted 3 weeks ago

Apply

9.0 - 14.0 years

11 - 16 Lacs

Bengaluru

Work from Office

About the Role: We are looking for an Associate Architect with at least 9 years of experience to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets.

Key Responsibilities:
- Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting
- Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark and Flink
- Lead improvements in data processing and serving layers, leveraging Databricks Spark, Trino, and Superset
- Maintain a good understanding of open table formats like Delta and Iceberg
- Scale data quality frameworks to ensure data accuracy and reliability
- Build data lineage tracking solutions for governance, access control, and compliance
- Collaborate with engineering, analytics, and business teams to identify opportunities and build or enhance self-serve data platforms
- Improve system stability, monitoring, and observability to ensure high availability of the platform
- Work with open-source communities and contribute to OSS projects aligned with Myntra's tech stack
- Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment

Qualifications:

Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.

Experience:
- 9+ years of experience in building large-scale data platforms
- Expertise in big data architectures using Databricks, Trino, and Debezium
- Strong experience with streaming platforms, including Confluent Kafka
- Experience in data ingestion, storage, processing, and serving in a cloud-based environment
- Hands-on experience implementing data quality checks using Great Expectations
- Deep understanding of data lineage, metadata management, and governance practices
- Strong knowledge of query optimization, cost efficiency, and scaling architectures
- Familiarity with OSS contributions and keeping up with industry trends in data engineering

Soft Skills:
- Strong analytical and problem-solving skills with a pragmatic approach to technical challenges
- Excellent communication and collaboration skills to work effectively with cross-functional teams
- Ability to lead large-scale projects in a fast-paced, dynamic environment
- Passion for continuous learning, open-source collaboration, and building best-in-class data products

Posted 3 weeks ago

Apply

9.0 - 14.0 years

30 - 35 Lacs

Bengaluru

Work from Office

About the Role: We are looking for an Associate Architect with at least 9 years of experience to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets.

Key Responsibilities:
- Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting
- Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark and Flink
- Lead improvements in data processing and serving layers, leveraging Databricks Spark, Trino, and Superset
- Maintain a good understanding of open table formats like Delta and Iceberg
- Scale data quality frameworks to ensure data accuracy and reliability
- Build data lineage tracking solutions for governance, access control, and compliance
- Collaborate with engineering, analytics, and business teams to identify opportunities and build or enhance self-serve data platforms
- Improve system stability, monitoring, and observability to ensure high availability of the platform
- Work with open-source communities and contribute to OSS projects aligned with Myntra's tech stack
- Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment

Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.

Experience:
- 9+ years of experience in building large-scale data platforms
- Expertise in big data architectures using Databricks, Trino, and Debezium
- Strong experience with streaming platforms, including Confluent Kafka
- Experience in data ingestion, storage, processing, and serving in a cloud-based environment
- Hands-on experience implementing data quality checks using Great Expectations
- Deep understanding of data lineage, metadata management, and governance practices
- Strong knowledge of query optimization, cost efficiency, and scaling architectures
- Familiarity with OSS contributions and keeping up with industry trends in data engineering

Soft Skills:
- Strong analytical and problem-solving skills with a pragmatic approach to technical challenges
- Excellent communication and collaboration skills to work effectively with cross-functional teams
- Ability to lead large-scale projects in a fast-paced, dynamic environment
- Passion for continuous learning, open-source collaboration, and building best-in-class data products

Posted 3 weeks ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Kolkata

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Informatica Data Quality
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. Your typical day will involve collaborating with team members to develop innovative solutions and ensure seamless application functionality.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Develop and implement Informatica Data Quality solutions.
- Collaborate with cross-functional teams to analyze and address data quality issues.
- Create and maintain documentation for data quality processes.
- Participate in data quality improvement initiatives.
- Assist in training junior professionals in data quality best practices.

Professional & Technical Skills:
- Must-have: proficiency in Informatica Data Quality.
- Strong understanding of data quality principles.
- Experience with ETL processes and data integration.
- Knowledge of data profiling and cleansing techniques.
- Familiarity with data governance and metadata management.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Informatica Data Quality.
- This position is based at our Kolkata office.
- 15 years of full-time education is required.

Posted 3 weeks ago

Apply

2.0 - 4.0 years

0 Lacs

Mumbai

Work from Office

The ENS services team at Burns & McDonnell India is building a team to support the US offices. The work the ENS team supports includes air quality services, remediation services, and natural and cultural resources. Burns & McDonnell India is looking for a candidate to join our Environmental Services (ENS) Group in the Bengaluru office to provide Geographic Information System (GIS) capability to BMcD projects, mainly in the USA. The GIS Trainee will assist in developing, updating, analyzing, and managing GIS data for a wide variety of professional services. This position will assist with a variety of GIS analysis, GIS data development, GIS data maintenance and data preparation tasks, in Desktop as well as ArcGIS Online, for use in public safety GIS systems. The Assistant GIS Specialist will work closely within functional teams to deliver GIS data and GIS services to our clients, with a strong emphasis on teamwork, customer commitment, sense of urgency, and continuous improvement.

Responsibilities:
- Assist with GIS, data, and mapping for the Engineering and Surveying departments while supporting both internal and external project teams.
- Assist with GIS mapping and data collection tasks using ArcGIS Online feature layers and mobile apps like Collector/Field Maps and Survey123 for ArcGIS.
- Utilize various software packages and information from various sources (MS Access databases, MS Excel spreadsheets, and documents such as deeds, field notes, etc.) to create GIS maps to support field personnel as well as for deliverables.
- Assist with digital feature extraction from multiple data sources.
- Assist with the projections and transformations for project deliverables.
- Assist in the creation and updating of new and existing GIS maps and map layers, and GPS data editing and representation using ArcGIS, in support of field surveys.
- Assist with GPS data collection and post-processing to high-accuracy data specifications.
- Assist with GIS metadata creation.
- Assist with land survey records searches and ownership data research from various sources.
- Assist with georeferencing raster and vector data.
- All other duties as assigned.

Qualifications:
- Bachelor's degree in GIS, geography, environmental science, or a closely related natural science field; a master's degree in a similar field is a plus.
- ArcGIS Desktop 10.1 or higher; ArcGIS Pro experience is preferable over ArcMap.
- Basic knowledge of ArcGIS Online and the Esri suite of mobile apps.
- GIS skills, including data analysis abilities.
- Excellent written and verbal communication skills.
- Strong analytical and problem-solving skills.
- Proficient computer skills, including the Microsoft Office suite.

Job: Engineering
Primary Location: India-Maharashtra-Mumbai
Schedule: Full-time
Travel: No
Req ID: 252362

Posted 3 weeks ago

Apply

1.0 - 4.0 years

3 - 6 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

- Deep understanding of the Guidewire framework, implementation, architecture and components
- Must have experience with Guidewire BillingCenter version 10.0+
- Well versed in the development streams: configuration, integration, or both
- Strong knowledge of the Guidewire platform (Gosu scripting / UI / data model)
- Implementation of rules (including UI validation) using Gosu scripting
- Metadata configuration of rules (including UI validation)
- Integration with external systems using Guidewire-platform-supported integration techniques

Posted 3 weeks ago

Apply

3.0 - 7.0 years

5 - 9 Lacs

Warangal, Hyderabad, Nizamabad

Work from Office

- Proficiency in data modeling tools such as ER/Studio, ERwin, or similar
- Deep understanding of relational database design, normalization/denormalization, and data warehousing principles
- Experience with SQL and working knowledge of database platforms like Oracle, SQL Server, PostgreSQL, or Snowflake
- Strong knowledge of metadata management, data lineage, and data governance practices
- Understanding of data integration, ETL processes, and data quality frameworks
- Ability to interpret and translate complex business requirements into scalable data models
- Excellent communication and documentation skills to collaborate with cross-functional teams

Posted 3 weeks ago

Apply