
70 Data Catalog Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

Role Overview:
As a Data Platform Engineer, you will be responsible for assisting with the blueprint and design of the data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models using Collibra Data Governance.

Key Responsibilities:
- Apply knowledge of the Collibra operating model, workflow BPMN development, and how to integrate various applications or systems with Collibra
- Design the Data Governance organization, including the steering committee, data governance office, stewardship layer, and other working groups
- Set up people and processes, including relevant roles, responsibilities, and controls, data ownership, workflows, and common processes

Qualifications Required:
- Minimum 5 years of experience in data governance across a wide variety of data types and sources, including HDFS, S3, Kafka, Cassandra, Hive, HBase, and Elasticsearch
- Experience working with the Collibra operating model, workflow BPMN development, and integrating various applications or systems with Collibra
- Professional experience working as an SME in data governance, metadata management, and data catalog solutions, specifically on Collibra Data Governance

Additional Details:
A Bachelor's degree in Computer Science is required for this role.

Posted 1 day ago

Apply

10.0 - 17.0 years

30 - 40 Lacs

Madurai

Remote

Dear Candidate,

Greetings of the day!

My name is Arumugam Veera, and I'm reaching out to you regarding an exciting opportunity with TechMango. You can connect with me on the platform below.
LinkedIn: https://www.linkedin.com/in/arumugamv/

Techmango Technology Services is a full-scale software development services company founded in 2014 with a strong focus on emerging technologies. Its primary objective is delivering strategic solutions aligned with the technology goals of its business partners. We are a leading full-scale Software and Mobile App Development Company. Techmango is driven by the mantra "Client's Vision is our Mission," and we stay true to this statement. Our aim is to be a technologically advanced and well-loved organization providing high-quality, cost-efficient services with a long-term client relationship strategy. We are operational in the USA (Chicago, Atlanta), Dubai (UAE), and India (Bangalore, Chennai, Madurai, Trichy).

Job Title: GCP Data Engineer/Lead/Architect
Job Location: Madurai/Chennai - Remote
Experience: 4-20 Years
Notice Period: Immediate
Mode: Remote (the initial 15 days to 1 month must be worked from the Madurai office)

Job Summary:
We are seeking a hands-on GCP Data Engineer with deep expertise in real-time streaming data architectures to help design, build, and optimize data pipelines in our Google Cloud Platform (GCP) environment. The ideal candidate will have strong architectural vision and be comfortable rolling up their sleeves to build scalable, low-latency streaming data pipelines using Pub/Sub, Dataflow (Apache Beam), and BigQuery.

Key Responsibilities:
- Architect and implement end-to-end streaming data solutions on GCP using Pub/Sub, Dataflow, and BigQuery
- Design real-time ingestion, enrichment, and transformation pipelines for high-volume event data
- Work closely with stakeholders to understand data requirements and translate them into scalable designs
- Optimize streaming pipeline performance, latency, and throughput
- Build and manage orchestration workflows using Cloud Composer (Airflow)
- Drive schema design, partitioning, and clustering strategies in BigQuery for both real-time and batch datasets
- Define SLAs, monitoring, logging, and alerting for streaming jobs using Cloud Monitoring, Error Reporting, and Stackdriver
- Experience with data modeling
- Ensure robust security, encryption, and access controls across all data layers
- Collaborate with DevOps on CI/CD automation of data workflows using Terraform, Cloud Build, and Git
- Document streaming architecture, data lineage, and deployment runbooks

Required Skills & Experience:
- 5+ years of experience in data engineering or architecture
- 3+ years of hands-on GCP data engineering experience
- Strong expertise in Google Pub/Sub, Dataflow (Apache Beam), BigQuery (including streaming inserts), Cloud Composer (Airflow), and Cloud Storage (GCS)
- Solid understanding of streaming design patterns, exactly-once delivery, and event-driven architecture
- Deep knowledge of SQL and NoSQL data modeling
- Hands-on experience with monitoring and performance tuning of streaming jobs
- Experience using Terraform or equivalent for infrastructure as code
- Familiarity with CI/CD pipelines for data workflows

Arumugam Veera
Manager - Talent Acquisition & Business Development
LinkedIn: Techmango Technology Services
My LinkedIn: Arumugam Veera
Website: www.techmango.net
Office Locations: USA - Atlanta, GA | UAE - Dubai | India - Chennai, Trichy & Madurai
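
For illustration only, here is a minimal Apache Beam (Python SDK) sketch of the kind of streaming pipeline this role describes: reading events from Pub/Sub, parsing them, and writing to BigQuery. The project, subscription, table, and schema names are hypothetical placeholders, not details from the posting.

```python
# Minimal streaming sketch: Pub/Sub -> parse JSON -> BigQuery (hypothetical names).
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run():
    options = PipelineOptions(project="my-gcp-project", region="us-central1",
                              streaming=True)

    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadEvents" >> beam.io.ReadFromPubSub(
                subscription="projects/my-gcp-project/subscriptions/events-sub")
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "WriteToBQ" >> beam.io.WriteToBigQuery(
                table="my-gcp-project:analytics.events",
                schema="event_id:STRING,event_ts:TIMESTAMP,payload:STRING",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED)
        )


if __name__ == "__main__":
    run()
```

In practice the same pipeline would typically be submitted with the Dataflow runner and accompanied by monitoring and dead-letter handling, as the responsibilities above describe.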

Posted 4 days ago

Apply

1.0 - 5.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Role Overview:
You will be part of the DQM team at AIM, a global community focused on driving data-driven transformation across Citi. As a member of the team, you will play a crucial role in managing the implementation of data quality measurement programs for the US region's retail consumer bank.

Key Responsibilities:
- Execute business data quality measurements in alignment with regulatory programs such as CCAR and AML
- Design data quality rules, test and validate them, and identify critical data elements in various systems
- Standardize data definitions and ensure consistency in measurements across different systems, products, and regions
- Publish monthly/quarterly scorecards at the product level and prepare executive summary reports for senior management
- Identify defects, investigate root causes, and follow up with stakeholders for resolution within SLAs
- Assist in audit processes by identifying control gaps and policy breaches and providing data evidence for audit completion

Qualifications Required:
- Strong analytical skills, with the ability to analyze and visualize data, formulate analytical methodologies, and identify trends and patterns for generating actionable business insights
- Proficiency in tools such as SAS or SQL and MS Excel is preferred
- Good understanding of data definitions, data discovery, data quality frameworks, data governance, and data warehousing
- Soft skills including the ability to solve complex business problems, excellent communication and interpersonal skills, good process management skills, and the ability to work effectively in teams
- Educational background: MBA, Mathematics, Information Technology, Computer Applications, or Engineering from a premier institute; BTech/BE in Information Technology, Information Systems, or Computer Applications
- A postgraduate degree in Computer Science, Mathematics, Operations Research, Econometrics, Management Science, or a related field is preferred
- 1 to 2 years of hands-on experience in delivering data quality solutions

Additional Company Details:
No additional company details were provided in the job description.
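
As an illustration of the kind of data quality rule this role would design, here is a minimal sketch that measures completeness and validity of a critical data element with SQL run from Python. The table, column names, and threshold are hypothetical, and sqlite3 stands in for whatever SQL engine the team actually uses.

```python
# Hypothetical completeness/validity check for a critical data element.
import sqlite3  # stand-in for the team's actual SQL engine

RULE_SQL = """
SELECT
    COUNT(*)                                            AS total_rows,
    SUM(CASE WHEN account_id IS NULL THEN 1 ELSE 0 END) AS null_account_ids,
    SUM(CASE WHEN balance < 0 THEN 1 ELSE 0 END)        AS negative_balances
FROM retail_accounts
"""

def run_rule(conn, threshold=0.99):
    total, nulls, negatives = conn.execute(RULE_SQL).fetchone()
    completeness = 1 - nulls / total if total else 0.0
    # One scorecard entry: pass/fail against the agreed threshold.
    return {
        "rule": "account_id completeness / balance validity",
        "completeness": round(completeness, 4),
        "negative_balances": negatives,
        "passed": completeness >= threshold and negatives == 0,
    }

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE retail_accounts (account_id TEXT, balance REAL)")
    conn.executemany("INSERT INTO retail_accounts VALUES (?, ?)",
                     [("A1", 100.0), (None, 50.0), ("A3", -5.0)])
    print(run_rule(conn))
```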

Posted 4 days ago

Apply

4.0 - 6.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Description

At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

How will you make an impact in this role?
We are building an energetic, high-performance team with a nimble and creative mindset to drive our technology and products. American Express (AXP) is a powerful brand, a great place to work, and has unparalleled scale. Join us for an exciting opportunity in Marketing Data Technology (MarTech Data Team) within American Express Technologies. This team specializes in creating and expanding a suite of data and insight solutions to power the customer marketing ecosystem. The team creates and manages various batch/real-time marketing data products that fuel the customer marketing platforms. As part of the team, you will get numerous opportunities to use and learn big data and GCP cloud technologies.

Job Responsibilities:
- Deliver features or software functionality independently and reliably
- Develop technical design documentation
- Function as a core member of an agile team by contributing to software builds through consistent development practices with respect to tools, common components, and documentation
- Perform hands-on ETL development for marketing data applications
- Participate in code reviews and automated testing
- Help other junior members of the team deliver
- Demonstrate analytical thinking: recommend improvements and best practices, and conduct experiments to prove or disprove them
- Provide continuous support for ongoing application availability
- Learn, understand, and participate fully in all team ceremonies, including work breakdown, estimation, and retrospectives
- Willingness to learn new technologies and exploit them to their optimal potential, including a substantiated ability to innovate and take pride in quickly deploying working software
- Demonstrated high energy, willingness to learn new technologies, and pride in how fast you develop working software

Minimum Qualifications:
- Bachelor's degree with a minimum of 4+ years of overall software design and development experience
- Expert in SQL and data warehousing concepts
- Hands-on expertise with cloud platforms, ideally Google Cloud Platform (GCP)
- Working knowledge of data storage solutions such as BigQuery or Cloud SQL and data engineering tools such as Airflow or Cloud Workflows
- Experience with other GCP services such as Cloud Storage, Pub/Sub, or Data Catalog
- Familiarity with Agile or other rapid application development methods
- Hands-on experience with one or more programming languages (Java, Python)
- Hands-on expertise with software development in big data (Hadoop, MapReduce, Spark, Hive)
- Experience with CI/CD pipelines, automated test frameworks, DevOps, and source code management tools (XLR, Jenkins, Git, Sonar, Stash, Maven, Jira, Confluence, Splunk, etc.)
- Knowledge of shell scripting tools and Ansible is an added advantage
- Strong communication and analytical skills, including effective presentation skills

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
- Competitive base salaries
- Bonus incentives
- Support for financial well-being and retirement
- Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
- Flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need
- Generous paid parental leave policies (depending on your location)
- Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
- Free and confidential counseling support through our Healthy Minds program
- Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
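
To make the Airflow/BigQuery portion of this stack concrete, here is a minimal sketch of a daily ETL DAG using the Google provider for Apache Airflow. The project, dataset, table names, and SQL are hypothetical placeholders, not anything from this posting.

```python
# Hypothetical daily marketing-data ETL DAG (Airflow + BigQuery).
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="marketing_daily_aggregate",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    build_daily_summary = BigQueryInsertJobOperator(
        task_id="build_daily_summary",
        configuration={
            "query": {
                "query": """
                    CREATE OR REPLACE TABLE `my-project.marketing.daily_summary` AS
                    SELECT campaign_id, DATE(event_ts) AS event_date, COUNT(*) AS events
                    FROM `my-project.marketing.raw_events`
                    GROUP BY campaign_id, event_date
                """,
                "useLegacySql": False,
            }
        },
    )
```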

Posted 5 days ago

Apply

8.0 - 10.0 years

0 Lacs

Hyderabad, Pune

Work from Office

1. Experience specifically in defining solutions, performing POCs, application design and development, and integration/migration.
2. Understanding of Google Cloud Platform technologies in the big data and data warehousing space (BigQuery, Pub/Sub, Dataflow, Data Catalog, Composer/Airflow, complex SQL, stored procedures), with experience delivering data architecture solutions using BigQuery, Google Dataflow, Google Pub/Sub, Google Cloud SQL, Google Compute Engine, etc. Should have at least 5 years of experience in GCP (Google Cloud Platform) or similar cloud technologies.
3. Experience in Looker with design, development, configuration setup, dashboarding, and reporting techniques.
4. Experience in DevOps tools such as Jenkins, Ansible, JIRA, SonarQube, NexusIQ, Checkmarx, and Cyberflow.
5. Strong data architecture background: data storage, transformation, event processing, APIs, IAM, security, and an understanding of cloud identity and access.
6. Analyze business needs and help articulate technical solutions.
7. Demonstrable track record of dealing well with ambiguity, prioritizing needs, and delivering results in a dynamic environment.
8. Excellent verbal and written communication skills, with the ability to effectively advocate technical solutions to engineering teams and business audiences.
9. Awareness of Agile/SAFe principles.
10. Knowledge of or experience with delivering against ESG regulations is preferable.
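
As a small illustration of the Data Catalog piece of this GCP stack, here is a sketch using the google-cloud-datacatalog Python client to look up the catalog entry behind a BigQuery table and list its governance tags. The project, dataset, and table names are hypothetical.

```python
# Hypothetical lookup of a BigQuery table's Data Catalog entry and its tags.
from google.cloud import datacatalog_v1

client = datacatalog_v1.DataCatalogClient()

# Resolve the catalog entry behind a BigQuery table.
entry = client.lookup_entry(
    request={
        "linked_resource": (
            "//bigquery.googleapis.com/projects/my-project"
            "/datasets/sales/tables/orders"
        )
    }
)
print(entry.display_name, entry.type_)

# Inspect governance tags attached to the entry (e.g., a PII classification template).
for tag in client.list_tags(parent=entry.name):
    print("tag template:", tag.template)
```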

Posted 1 week ago

Apply

5.0 - 15.0 years

5 - 15 Lacs

Hyderabad, Telangana, India

On-site

Responsibilities:
- Support the implementation of AIOps strategies for automating IT operations using Azure Monitor, Azure Log Analytics, and AI-driven alerting.
- Assist in deploying Azure-based observability solutions (Azure Monitor, Application Insights, Azure Synapse for log analytics, and Azure Data Explorer) to enhance real-time system performance monitoring.
- Enable AI-driven anomaly detection and root cause analysis (RCA) by collaborating with data science teams using Azure Machine Learning (Azure ML) and AI-powered log analytics.
- Contribute to developing self-healing and auto-remediation mechanisms using Azure Logic Apps, Azure Functions, and Power Automate to proactively resolve system issues.
- Support ML lifecycle automation using Azure ML, Azure DevOps, and Azure Pipelines for CI/CD of ML models.
- Assist in deploying scalable ML models with Azure Kubernetes Service (AKS), Azure Machine Learning Compute, and Azure Container Instances.
- Automate feature engineering, model versioning, and drift detection using Azure ML Pipelines and MLflow.
- Optimize ML workflows with Azure Data Factory, Azure Databricks, and Azure Synapse Analytics for data preparation and ETL/ELT automation.
- Implement basic monitoring and explainability for ML models using the Azure Responsible AI Dashboard and InterpretML.
- Collaborate with Data Science, DevOps, CloudOps, and SRE teams to align AIOps/MLOps strategies with enterprise IT goals.
- Work closely with business stakeholders and IT leadership to implement AI-driven insights and automation to enhance operational decision-making.
- Track and report AI/ML operational KPIs, such as model accuracy, latency, and infrastructure efficiency.
- Assist in coordinating with cross-functional teams to maintain system performance and ensure operational resilience.
- Support the implementation of AI ethics, bias mitigation, and responsible AI practices using Azure Responsible AI Toolkits.
- Ensure adherence to Azure Information Protection (AIP), Role-Based Access Control (RBAC), and data security policies.
- Assist in developing risk management strategies for AI-driven operational automation in Azure environments.
- Prepare and present program updates, risk assessments, and AIOps/MLOps maturity progress to stakeholders as needed.
- Support efforts to attract and build a diverse, high-performing team to meet current and future business objectives.
- Help remove barriers to agility and enable the team to adapt quickly to shifting priorities without losing productivity.
- Contribute to developing the appropriate organizational structure, resource plans, and culture to support business goals.
- Leverage technical and operational expertise in cloud and high-performance computing to understand business requirements and earn trust with stakeholders.

Qualifications:
- 5+ years of technology work experience in a global organization, preferably in CPG or a similar industry.
- 5+ years of experience in the Data & Analytics field, with exposure to AI/ML operations and cloud-based platforms.
- 5+ years of experience working within cross-functional IT or data operations teams.
- 2+ years of experience in a leadership or team coordination role within an operational or support environment.
- Experience in AI/ML pipeline operations, observability, and automation across platforms such as Azure, AWS, and GCP.
- Excellent communication: ability to convey technical concepts to diverse audiences and empathize with stakeholders while maintaining confidence.
- Customer-centric approach: strong focus on delivering the right customer experience by advocating for customer needs and ensuring issue resolution.
- Problem ownership and accountability: proactive mindset to take ownership, drive outcomes, and ensure customer satisfaction.
- Growth mindset: willingness and ability to adapt and learn new technologies and methodologies in a fast-paced, evolving environment.
- Operational excellence: experience in managing and improving large-scale operational services with a focus on scalability and reliability.
- Site reliability and automation: understanding of SRE principles, automated remediation, and operational efficiencies.
- Cross-functional collaboration: ability to build strong relationships with internal and external stakeholders through trust and collaboration.
- Familiarity with CI/CD processes, data pipeline management, and self-healing automation frameworks.
- Strong understanding of data acquisition, data catalogs, data standards, and data management tools.
- Knowledge of master data management concepts, data governance, and analytics.
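
To illustrate the anomaly-detection responsibility above, here is a hedged sketch that pulls a response-time series from Azure Log Analytics with the azure-monitor-query library and flags outliers with a simple z-score rule. The workspace ID, KQL query, and threshold are hypothetical, and a production AIOps setup would use Azure ML or built-in alerting rather than this toy statistic.

```python
# Hypothetical sketch: pull a response-time series from Log Analytics and flag anomalies.
from datetime import timedelta
from statistics import mean, stdev

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder
KQL = """
AppRequests
| summarize avg_duration = avg(DurationMs) by bin(TimeGenerated, 5m)
| order by TimeGenerated asc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, KQL, timespan=timedelta(hours=6))

durations = [row["avg_duration"] for row in response.tables[0].rows]
if len(durations) >= 2:
    mu, sigma = mean(durations), stdev(durations)
    # Simple z-score rule: anything beyond 3 sigma is treated as an anomaly to alert on.
    anomalies = [d for d in durations if sigma and abs(d - mu) / sigma > 3]
    print(f"{len(anomalies)} anomalous 5-minute buckets out of {len(durations)}")
```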

Posted 1 week ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Hyderabad, Telangana, India

On-site

Responsibilities:
- Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics.
- Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability.
- Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance.
- Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform.
- Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making.
- Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams.
- Support data operations and sustainment activities, including testing and monitoring processes for global products and projects.
- Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams.
- Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements.
- Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs.
- Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams.
- Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams.
- Support the development and automation of operational policies and procedures, improving efficiency and resilience.
- Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies.
- Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery.
- Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps.
- Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals.
- Utilize technical expertise in cloud and data operations to support service reliability and scalability.

Qualifications:
- 5+ years of technology work experience in a large-scale global organization, with CPG industry experience preferred.
- 5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance.
- 2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams.
- Experience in a lead or senior support role, with a focus on DataOps execution and delivery.
- Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences.
- Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements.
- Customer-focused mindset, ensuring high-quality service delivery and operational efficiency.
- Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment.
- Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation.
- Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements.
- Understanding of operational excellence in complex, high-availability data environments.
- Ability to collaborate across teams, building strong relationships with business and IT stakeholders.
- Basic understanding of data management concepts, including master data management, data governance, and analytics.
- Knowledge of data acquisition, data catalogs, data standards, and data management tools.
- Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results.
- Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.
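
For a concrete feel of the day-to-day DataOps automation described above, here is a hedged sketch using the azure-mgmt-datafactory SDK to trigger an ADF pipeline run and poll its status for SLA tracking. The subscription, resource group, factory, and pipeline names are placeholders.

```python
# Hypothetical sketch: trigger an ADF pipeline run and poll its status.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholders, not values from the posting
RESOURCE_GROUP = "rg-dataops"
FACTORY_NAME = "adf-enterprise"
PIPELINE_NAME = "ingest_daily_sales"

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

run = client.pipelines.create_run(RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME)
print("Started run:", run.run_id)

# Poll until the run finishes, then report the outcome for SLA tracking.
while True:
    status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id)
    if status.status not in ("InProgress", "Queued"):
        print("Finished with status:", status.status)
        break
    time.sleep(30)
```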

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Kolkata, West Bengal

On-site

We are currently hiring for a "Data Governance Analyst" position at a leading bank, located in Kolkata with a work-from-office option.

To be considered for this role, you should have a minimum of 5+ years of experience in enterprise data governance. You should also have experience working with data warehouse technologies and data governance solutions such as Data Catalog, MDM, and Data Quality. Additionally, you must possess at least 3+ years of practical experience configuring business glossaries, dashboards, policies, search, and data maps.

In this role, you will be expected to have 3+ years of experience in data standardization: cleansing, transforming, and parsing data. You will be responsible for developing data standardization mapplets and mappings. A working knowledge of data governance tools such as Informatica and Collibra would be advantageous. Furthermore, certifications in DAMA, EDM Council, or IQINT would be beneficial. Knowledge of AI/ML and its application in data governance is also considered a plus for this position.

If you are interested in this opportunity, please share your resume with bhumika@peoplemint.in.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

The Natural Language Query (NLQ) platform is an innovative initiative designed to transform the way users interact with data. Our platform leverages advanced natural language processing (NLP) to convert user queries in plain language into executable SQL queries, enabling seamless data retrieval and analysis without the need for SQL expertise. The NLQ platform is powered by the Enterprise Data Catalog, ensuring comprehensive and accurate metadata definitions for all table and column information. This platform empowers different business units to build external-facing conversational BI chatbots to handle customer requests, while also reducing data exploration effort for internal data analysts by more than 90%.

Key Responsibilities:
- Provide Platform-as-a-Service offerings that are easy to consume, scalable, secure, and reliable, using open-source-based solutions.
- Develop and enhance the NLQ Copilot platform, ensuring it meets the evolving needs of multi-tenant environments.
- Implement context builder algorithms, leveraging different prompt engineering techniques to generate accurate SQL that matches customer needs.
- Collaborate with downstream clients to integrate business requirements, adding robust guardrails to prevent unauthorized query generation.
- Work closely with data scientists, engineers, and product managers to optimize the performance of the NLQ platform.
- Utilize cutting-edge NLP/LLM and machine learning techniques to improve the accuracy and efficiency of query transformations.
- Ensure the platform's scalability and reliability through rigorous testing and continuous improvement.
- Champion the adoption of open infrastructure solutions that are fit for purpose while keeping technology relevant.
- Spend 80% of the time writing code in different languages, frameworks, and technology stacks.

This is a hybrid position. The expected number of days in the office will be confirmed by your hiring manager.

Qualifications

Basic Qualifications:
- 4+ years of experience in architecture design and development of large-scale data management platforms and data applications with simple solutions
- Bachelor's or master's degree in Computer Science or a related technical discipline required
- Extensive hands-on coding and design skills in Java/Python for the backend
- MVC (model-view-controller) for end-to-end development
- SQL/NoSQL technology; familiarity with databases such as Oracle, DB2, and SQL Server
- Web services (REST/SOAP/gRPC)
- React/Angular for the front end (UI front-end experience is nice to have)
- Expertise in the design and management of complex data structures and data processes
- Expertise in efficiently leveraging distributed big data systems, including but not limited to Hadoop, Hive, Spark, and Kafka streaming
- Strong service architecture and development experience with high performance and scalability
- Results-driven and self-motivated, with a strong learning mindset and a good understanding of related advanced and emerging technologies; keeps up with technology developments in the industry that could be leveraged to enhance current architectures and build durable new ones
- Strong leadership and team player
- Strong skills in mentoring and growing junior engineers

Preferred Qualifications:
- Deep knowledge of and hands-on experience with big data and cloud computing technologies
- Experience with LLM/GenAI tools and applications and prompt engineering
- Experience with ETL/ELT tools and applications
- Experience with Apache NiFi and Apache Spark for processing large data sets
- Experience with Elasticsearch
- Knowledge of data catalog tools
- Experience building data pipeline development tools
- Experience with data governance and data quality tools
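
A minimal sketch of the kind of context builder described above: it assembles table and column metadata from a catalog into a prompt an LLM could use to translate a natural-language question into SQL. The metadata structure and prompt wording are illustrative assumptions, not the platform's actual implementation.

```python
# Illustrative context builder: catalog metadata -> prompt for NL-to-SQL generation.
from dataclasses import dataclass
from typing import List


@dataclass
class ColumnMeta:
    name: str
    dtype: str
    description: str


@dataclass
class TableMeta:
    name: str
    description: str
    columns: List[ColumnMeta]


def build_prompt(question: str, tables: List[TableMeta]) -> str:
    """Render catalog metadata plus guardrails into a single LLM prompt."""
    schema_lines = []
    for t in tables:
        cols = ", ".join(f"{c.name} {c.dtype} ({c.description})" for c in t.columns)
        schema_lines.append(f"TABLE {t.name}: {t.description}. Columns: {cols}")
    schema_block = "\n".join(schema_lines)
    return (
        "You translate business questions into SQL.\n"
        "Use only the tables and columns below; never modify data (SELECT only).\n\n"
        f"{schema_block}\n\n"
        f"Question: {question}\nSQL:"
    )


if __name__ == "__main__":
    orders = TableMeta(
        "sales.orders", "One row per customer order",
        [ColumnMeta("order_id", "STRING", "unique order key"),
         ColumnMeta("amount", "NUMERIC", "order value in USD"),
         ColumnMeta("order_date", "DATE", "date the order was placed")],
    )
    print(build_prompt("Total order value by month in 2024?", [orders]))
```

In a real multi-tenant deployment the guardrails would also enforce row-level security and validate the generated SQL before execution, as the responsibilities above imply.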

Posted 1 week ago

Apply

5.0 - 12.0 years

0 Lacs

Hyderabad, Telangana

On-site

You will be responsible for designing and implementing Enterprise Data Management systems, with at least 5 years of experience in this field. Your role will involve building integrated systems throughout the entire life cycle, including analysis, design, testing, implementation, and support.

Preferred qualifications for this position include having 12 years of experience in architecting full-scale enterprise solutions with globally distributed teams. Experience with graph mining models and machine learning applications is highly desired. You should also have expertise in evaluating, standardizing, and promoting data governance tools related to Data Catalog, Reference and Master Data Management, Data Warehousing, Data Pipelines, Business Intelligence (BI), and Analytics. Your ability to create and maintain data governance workflows to enhance the organization's data management maturity will be crucial. You should also be capable of driving alignment and forming partnerships with executive leadership from teams outside your extended team.

As part of the Business Systems Integration team at Google, you will work swiftly to remove obstacles that hinder progress. By identifying inefficient internal processes, you will develop reliable and scalable solutions tailored to the company's size and scope. Your responsibilities will include translating Googler needs into technical specifications, designing and developing systems, and collaborating with Google executives for smooth implementation. Whether optimizing system processes or leveraging Google's suite of products, your efforts will help Googlers work more efficiently.

Within Corp Eng, you will contribute to building innovative business solutions that enhance Google's accessibility for all users. As part of Google's IT organization, your role will involve providing end-to-end solutions for various teams within Google, ensuring that they have the necessary tools and platforms to create user-friendly products and services. In essence, you will be supporting Googlers to make Google more helpful for everyone.

Your responsibilities will also include facilitating the delivery of Enterprise Data Solutions, contributing to transformation programs, and driving the design, build, and deployment of components in the Enterprise Data Management domain. You will be tasked with developing and maintaining data architecture, models, designs, and integrations across different business processes and IT systems. Additionally, you will design and implement problem-solving and reporting solutions to generate valuable insights for informed business decisions. Identifying opportunities to enhance enterprise data programs by collaborating with executive stakeholders and engaging with engineering and product management teams will also be part of your role.

Posted 2 weeks ago

Apply

1.0 - 5.0 years

0 Lacs

Karnataka

On-site

The role of a Data Governance Specialist at Hitachi Energy involves being a key enabler in shaping and operationalizing the enterprise-wide data governance framework. Your focus will be on the implementation and evolution of the Data Catalog, Metadata Management, and Data Compliance initiatives to ensure that data assets are trusted, discoverable, and aligned with business value.

You will play a critical role in defining and maintaining the roadmap for the Enterprise Data Catalog and Data Supermarket. This includes configuring and executing the deployment of cataloging capabilities such as metadata management, lineage, and glossary, while ensuring alignment with DAMA-DMBOK principles. Collaboration with Data Owners, Stewards, and Custodians will be essential in defining and enforcing data policies, standards, and the RACI model. Additionally, you will support the Data Governance Council and contribute to the development of governance artifacts such as roles, regulations, and KPIs.

Partnering with domain experts, you will drive data profiling, cleansing, and validation initiatives to ensure data quality and support remediation efforts across domains. Providing training and support to business users on catalog usage and governance practices will be part of your responsibilities, acting as a liaison between business and IT to ensure data needs are met and governance is embedded in operations. Staying current with industry trends and tool capabilities such as Databricks and SAP MDG, you will propose enhancements to governance processes and tooling based on user feedback and analytics.

To qualify for this role, you should have a Bachelor's degree in Information Systems, Data Science, Business Informatics, or a related field, along with 1-3 years of experience in data governance, data management, or analytics roles. Familiarity with the DAMA-DMBOK2 framework and data governance tools is required, as are strong communication and collaboration skills to work across business and technical teams. Being proactive, solution-oriented, and eager to learn is important for this role, and the ability to work autonomously and manage ambiguity is a competitive advantage. Preference will be given to candidates with CDMP certifications.

Joining Hitachi Energy offers a purpose-driven role in a global energy leader committed to sustainability and digital transformation. You can expect mentorship and development opportunities within a diverse and inclusive team, working with cutting-edge technologies and a culture that values integrity, curiosity, and collaboration, in line with Hitachi Energy's Leadership Pillars. Individuals with disabilities requiring accessibility assistance or accommodations in the job application process can request reasonable accommodations through the Hitachi Energy career site to support them during the application process.

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Data Engineer at Aptiv, you will play a crucial role in designing, developing, and implementing a cost-effective, scalable, reusable, and secure ingestion framework. Your primary responsibility will be to work closely with business leaders, stakeholders, and source-system Subject Matter Experts (SMEs) to understand and define the business needs, translate them into technical specifications, and ingest data into Google Cloud Platform, specifically BigQuery. You will be involved in designing and implementing processes for data ingestion, transformation, storage, analysis, modeling, reporting, monitoring, availability, governance, and security of high volumes of structured and unstructured data.

Your role will involve developing and deploying high-throughput data pipelines using the latest Google Cloud Platform (GCP) technologies, serving as a specialist in data engineering and GCP data technologies, and engaging with clients to understand their requirements and translate them into technical data solutions. You will also be responsible for analyzing business requirements, creating source-to-target mappings, enhancing ingestion frameworks, and transforming data according to business rules. Additionally, you will develop capabilities to support enterprise-wide data cataloging, design data solutions with a focus on security and privacy, and utilize Agile and DataOps methodologies in project delivery.

To qualify for this role, you should have a Bachelor's or Master's degree in Computer Science, Data & Analytics, or a similar relevant field, along with at least 4 years of hands-on IT experience in a similar role. You should possess proven expertise in SQL, including subqueries, aggregations, functions, triggers, indexes, and database optimization, as well as deep experience working with various Google data products such as BigQuery, Dataproc, Data Catalog, Dataflow, and Cloud SQL, among others. Experience with tools such as Qlik Replicate, Spark, and Kafka is also required. Strong communication skills, the ability to work with globally distributed teams, and knowledge of statistical methods and data modeling are essential for this role. Experience with designing and creating Tableau, Qlik, or Power BI dashboards, as well as knowledge of Alteryx and Informatica Data Quality, will be beneficial.

Aptiv provides an inclusive work environment where individuals can grow and develop, irrespective of gender, ethnicity, or beliefs. Safety is a core value at Aptiv, aiming for a world with zero fatalities, zero injuries, and zero accidents. The company offers a competitive health insurance package to support the physical and mental health of its employees. Additionally, Aptiv provides benefits such as personal holidays, healthcare, pension, a tax saver scheme, free onsite breakfast, discounted corporate gym membership, and access to transportation options at the Grand Canal Dock location.

If you are passionate about data engineering, GCP technologies, and driving value creation through data analytics, Aptiv offers a challenging and rewarding opportunity to grow and make a meaningful impact in a dynamic and innovative environment.
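
As a small illustration of the kind of ingestion step this framework would automate, here is a hedged sketch that loads Parquet files from Cloud Storage into a BigQuery table with the google-cloud-bigquery client. The project, bucket, and table names are placeholders.

```python
# Hypothetical ingestion step: load Parquet files from GCS into a BigQuery table.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://my-ingestion-bucket/sales/2024-06-01/*.parquet",  # placeholder path
    "my-project.raw.sales_orders",
    job_config=job_config,
)
load_job.result()  # wait for the load job to complete

table = client.get_table("my-project.raw.sales_orders")
print(f"Loaded {table.num_rows} rows into {table.full_table_id}")
```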

Posted 2 weeks ago

Apply

4.0 - 9.0 years

15 - 20 Lacs

Bengaluru

Remote

Dear Candidate,

Greetings of the day!

My name is Arumugam Veera, and I'm reaching out to you regarding an exciting opportunity with TechMango. You can connect with me on the platforms below.
LinkedIn: https://www.linkedin.com/in/arumugamv/
Mobile (not available for calls; WhatsApp only): +91 6369 002 769
Email: arumugam.veera@techmango.net

Techmango Technology Services is a full-scale software development services company founded in 2014 with a strong focus on emerging technologies. Its primary objective is delivering strategic solutions aligned with the technology goals of its business partners. We are a leading full-scale Software and Mobile App Development Company. Techmango is driven by the mantra "Client's Vision is our Mission," and we stay true to this statement. Our aim is to be a technologically advanced and well-loved organization providing high-quality, cost-efficient services with a long-term client relationship strategy. We are operational in the USA (Chicago, Atlanta), Dubai (UAE), and India (Bangalore, Chennai, Madurai, Trichy).

Job Title: GCP Data Engineer
Job Location: Madurai/Chennai - Remote
Experience: 4+ Years
Notice Period: Immediate
Mode: Remote (the initial 15 days to 1 month must be worked from the Madurai office)

Job Summary:
We are seeking a hands-on GCP Data Engineer with deep expertise in real-time streaming data architectures to help design, build, and optimize data pipelines in our Google Cloud Platform (GCP) environment. The ideal candidate will have strong architectural vision and be comfortable rolling up their sleeves to build scalable, low-latency streaming data pipelines using Pub/Sub, Dataflow (Apache Beam), and BigQuery.

Key Responsibilities:
- Architect and implement end-to-end streaming data solutions on GCP using Pub/Sub, Dataflow, and BigQuery
- Design real-time ingestion, enrichment, and transformation pipelines for high-volume event data
- Work closely with stakeholders to understand data requirements and translate them into scalable designs
- Optimize streaming pipeline performance, latency, and throughput
- Build and manage orchestration workflows using Cloud Composer (Airflow)
- Drive schema design, partitioning, and clustering strategies in BigQuery for both real-time and batch datasets
- Define SLAs, monitoring, logging, and alerting for streaming jobs using Cloud Monitoring, Error Reporting, and Stackdriver
- Experience with data modeling
- Ensure robust security, encryption, and access controls across all data layers
- Collaborate with DevOps on CI/CD automation of data workflows using Terraform, Cloud Build, and Git
- Document streaming architecture, data lineage, and deployment runbooks

Required Skills & Experience:
- 5+ years of experience in data engineering or architecture
- 3+ years of hands-on GCP data engineering experience
- Strong expertise in Google Pub/Sub, Dataflow (Apache Beam), BigQuery (including streaming inserts), Cloud Composer (Airflow), and Cloud Storage (GCS)
- Solid understanding of streaming design patterns, exactly-once delivery, and event-driven architecture
- Deep knowledge of SQL and NoSQL data modeling
- Hands-on experience with monitoring and performance tuning of streaming jobs
- Experience using Terraform or equivalent for infrastructure as code
- Familiarity with CI/CD pipelines for data workflows

Arumugam Veera
Manager - Talent Acquisition & Business Development
Mobile & WhatsApp only: +91 6369 002 769
LinkedIn: Techmango Technology Services
My LinkedIn: Arumugam Veera
Website: www.techmango.net
Office Locations: USA - Atlanta, GA | UAE - Dubai | India - Chennai, Trichy & Madurai

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Collibra Data Governance Specialist, you will be responsible for managing and maintaining the Collibra Data Catalog, Data Dictionary, and Business Glossaries. Your role will involve implementing and supporting data quality rules, custom lineage stitching, and metadata ingestion processes. Additionally, you will participate in workflow creation and configuration within Collibra, collaborating with business stakeholders to maintain high-quality metadata. It is essential to apply data governance frameworks to ensure regulatory compliance with standards such as GDPR and HIPAA. Supporting data lifecycle initiatives from creation to archival and working cross-functionally in a Scrum/Agile environment are also key responsibilities.

To excel in this role, you should have at least 2 years of hands-on experience with Collibra, preferably in a data steward role, and a minimum of 3 years in a data governance team or function. A strong understanding of data management, data lineage, and best practices in data quality is crucial. Your experience should also include metadata management, lineage stitching, and automating data processes. Excellent communication skills, both technical and business-facing, will be essential for effective collaboration.

While not mandatory, it would be beneficial to have experience with or an understanding of Master Data Management (MDM) and Collibra certifications such as Ranger or Expert. Exposure to tools such as Informatica, Azure Purview, Alation, or Tableau would also be advantageous in this role.

If you are ready to take on this exciting opportunity, connect with us at shalini.v@saranshinc.com.

Posted 2 weeks ago

Apply

10.0 - 15.0 years

60 - 100 Lacs

Bengaluru

Hybrid

Role: Senior Engineering Manager, Data Platforms
Reference Code: HR1175636213662955
Experience: 10-15 years
Salary: Confidential (based on experience)
Opportunity Type: Hybrid (Bengaluru)
Placement Type: Full-time Permanent Position
(*Note: This is a requirement for one of Uplers' clients.)

Candidates for this position are preferred to be based in Bangalore, India, and will be expected to comply with their team's hybrid work schedule requirements.

About the Role:
Our client runs the largest custom e-commerce large-parcel network in the United States, approximately 1.6 million square meters of logistics space. The network is inherently a highly variable ecosystem that requires flexible, reliable, and resilient systems to operate efficiently. The Data Services & Data Enablement team is looking for smart, passionate, and curious people who are excited to help us scale, support, and engineer our database, distributed analytics, and streaming infrastructure. With the broad reach of the technologies we are using, you will have the opportunity to grow your network and skills by being exposed to new people and ideas across a diverse set of cutting-edge technologies. If you are the type of person who is fascinated by engineering extremely large and diverse data systems, and if you are passionate about troubleshooting challenging technical problems in a rapidly innovating cloud environment, you could be a great fit.

What You'll Do:
- Play a key role in developing and driving a multi-year technology strategy for a complex platform.
- Lead multiple software development teams, architecting solutions at scale to empower the business and owning all aspects of the SDLC: design, build, deliver, and maintain.
- Directly and indirectly manage several software engineers by providing coaching, guidance, and mentorship to grow the team as well as individuals.
- Inspire, coach, mentor, and support your team members in their day-to-day work and their long-term professional growth.
- Attract, onboard, develop, and retain diverse top talent, while fostering an inclusive and collaborative team and culture.
- Lead your team and peers by example; as a senior member of the team, your methodologies, technical and operational excellence practices, and system designs will help continuously improve our domain.
- Identify, propose, and drive initiatives to advance the technical skills, standards, practices, architecture, and documentation of our engineering teams.
- Facilitate technical debate and decision-making with an appreciation for trade-offs.
- Continuously rethink and push the status quo, even when it challenges your/our established ideas.

What You'll Need:
- A results-oriented, collaborative, pragmatic, and continuous-improvement mindset.
- 10+ years of experience in engineering, of which at least 5-6 years were spent leading highly performant teams.
- Experience in the development of new applications using technologies such as Python, Java, or Go.
- Experience making architectural and design-related decisions for large-scale platforms, understanding the trade-offs between time-to-market and flexibility.
- Significant experience and vocation in managing and enabling people's growth and performance.
- Practical experience in hiring and developing engineering teams and culture, and in leading interdisciplinary teams in a fast-paced agile environment.
- The capability to communicate and collaborate across the wider organization, influencing decisions with and without direct authority, and always with inclusive, adaptable, and persuasive communication.
- Analytical and decision-making skills that integrate technical and business requirements.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

30 - 35 Lacs

Bengaluru

Work from Office

We are seeking an experienced IDMC CDGC (Informatica Data Management Cloud - Cloud Data Governance and Catalog) Consultant to design and implement data governance, metadata management, and data catalog solutions. The ideal candidate should have expertise in data governance frameworks, data lineage, metadata management, data cataloging, and Informatica IDMC CDGC.

Key Responsibilities:
- Design and implement Informatica IDMC CDGC (Cloud Data Governance and Catalog) solutions to enable enterprise-wide data governance.
- Develop and maintain data catalogs, metadata repositories, and business glossaries.
- Configure data lineage, impact analysis, and data classification to improve data visibility and compliance.
- Define and enforce data governance policies, data ownership, and stewardship models.
- Work with data quality, data profiling, and compliance frameworks (GDPR, CCPA, HIPAA, etc.).
- Collaborate with business and IT teams to establish data governance best practices and workflows.
- Integrate CDGC with various data platforms (Snowflake, AWS, Azure, GCP, Databricks, etc.).
- Develop custom rules, policies, and workflows for data governance automation.
- Ensure role-based access control (RBAC) and security best practices for data access.
- Provide training and support to data stewards, analysts, and business users on IDMC CDGC features.

Required Skills & Qualifications:
- 6+ years of experience in data governance, data cataloging, and metadata management.
- Hands-on experience with Informatica IDMC CDGC (Cloud Data Governance and Catalog).
- Strong knowledge of data lineage, data profiling, data quality, and business glossaries.
- Proficiency in SQL, metadata modeling, and the integration of governance tools.
- Experience with data governance frameworks (DCAM, DAMA-DMBOK, etc.).
- Strong understanding of data privacy, security, and compliance requirements.
- Familiarity with ETL tools, cloud data platforms, and API-based integrations.
- Excellent communication and documentation skills to support data governance initiatives.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

20 - 25 Lacs

Bengaluru

Work from Office

The Platform Data Engineer will be responsible for designing and implementing robust data platform architectures, integrating diverse data technologies, and ensuring scalability, reliability, performance, and security across the platform. The role involves setting up and managing infrastructure for data pipelines, storage, and processing, developing internal tools to enhance platform usability, implementing monitoring and observability, collaborating with software engineering teams for seamless integration, and driving capacity planning and cost optimization initiatives.

Posted 2 weeks ago

Apply

7.0 - 9.0 years

25 - 30 Lacs

Navi Mumbai

Work from Office

Key Responsibilities:
- Lead the end-to-end implementation of a data cataloging solution within AWS (preferably AWS Glue Data Catalog, or third-party tools such as Apache Atlas, Alation, Collibra, etc.).
- Establish and manage metadata frameworks for structured and unstructured data assets in the data lake and data warehouse environments.
- Integrate the data catalog with AWS-based storage solutions such as S3, Redshift, Athena, Glue, and EMR.
- Collaborate with data governance/BPRG/IT project teams to define metadata standards, data classifications, and stewardship processes.
- Develop automation scripts for catalog ingestion, lineage tracking, and metadata updates using Python, Lambda, PySpark, or custom Glue/EMR jobs.
- Work closely with data engineers, data architects, and analysts to ensure metadata is accurate, relevant, and up to date.
- Implement role-based access controls and ensure compliance with data privacy and regulatory standards.
- Create detailed documentation and deliver training/workshops for internal stakeholders on using the data catalog.
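
As a minimal illustration of the catalog automation described above, here is a boto3 sketch that walks the AWS Glue Data Catalog, listing databases and tables and surfacing basic metadata that could feed lineage or stewardship reports. The region is a placeholder, and nothing here is specific to this employer's environment.

```python
# Minimal Glue Data Catalog walk: list databases/tables and surface basic metadata.
import boto3

glue = boto3.client("glue", region_name="ap-south-1")  # region is a placeholder

db_pages = glue.get_paginator("get_databases").paginate()
for db_page in db_pages:
    for db in db_page["DatabaseList"]:
        db_name = db["Name"]
        table_pages = glue.get_paginator("get_tables").paginate(DatabaseName=db_name)
        for table_page in table_pages:
            for table in table_page["TableList"]:
                cols = [c["Name"] for c in table["StorageDescriptor"]["Columns"]]
                location = table["StorageDescriptor"].get("Location")
                print(f"{db_name}.{table['Name']}: {len(cols)} columns, location={location}")
```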

Posted 2 weeks ago

Apply

2.0 - 4.0 years

7 - 11 Lacs

Gurugram, Bengaluru

Hybrid

Dear Candidate,

Immediate hiring for Data Governance for one of the top MNCs.

Role: Data Governance Developer
Primary Skills: Data governance, metadata management, Linux, MySQL, Collibra
Notice Period: Immediate - 15 Days
Location: Gurugram, Bangalore
Employment Type: Full time
Work Mode: Hybrid

Roles and Responsibilities:
- 3+ years of IT industry experience working in Data Governance/Data Engineering/Data Architecture areas.
- Experience working in a production-grade environment.
- Certification in Collibra learning paths will be a plus.
- Working experience in data governance, metadata management, and data catalog solutions, specifically on Collibra tools.
- Must have: hands-on experience with Linux, as well as experience with relational and non-relational databases/data sources (MySQL, PostgreSQL).
- Experience troubleshooting web-based applications.
- Experience with Java and REST APIs.
- Excellent knowledge of certificates: SSL, SSO, and LDAP.
- Knowledge of the Collibra operating model, workflow BPMN development, and how to integrate various applications or systems with Collibra.
- Excellent problem-solving, analytical, and written and verbal communication skills.
- Knowledge of SaaS solutions and containerized applications will be a plus.
- Must-have skills: good hands-on experience with and core understanding of Collibra products (Data Governance, Data Catalog, Data Intelligence Platform).

Posted 3 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Nagpur, Maharashtra

On-site

As part of this role, you will be responsible for conducting descriptive and diagnostic analytics on various data sources. Your tasks will involve applying machine learning tools to both data and metadata in order to enhance data quality. By identifying patterns and trends within data sets, you will contribute to the overall understanding of the information available.

Additionally, you will play a key role in developing data profiles for different data tables and elements present in the data lake. Your responsibilities will also include creating data catalog entries and ensuring the quality of metadata within the data catalog. Collaboration with team members to devise data cleansing methods and rules will be essential to maintain data integrity and consistency.

This position is a full-time, permanent role that requires your physical presence at the work location.
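
A small sketch of the data-profiling step described here: computing column-level profiles (data type, null rate, distinct count, example value) that could seed data catalog entries. The DataFrame is a stand-in for any table in the data lake; the column names are hypothetical.

```python
# Illustrative column profiling to seed data catalog entries.
import pandas as pd

def profile_table(df: pd.DataFrame) -> pd.DataFrame:
    """Return one profile row per column: type, null rate, distinct count, example."""
    rows = []
    for col in df.columns:
        s = df[col]
        rows.append({
            "column": col,
            "dtype": str(s.dtype),
            "null_rate": round(s.isna().mean(), 4),
            "distinct_count": int(s.nunique(dropna=True)),
            "example": s.dropna().iloc[0] if s.notna().any() else None,
        })
    return pd.DataFrame(rows)

if __name__ == "__main__":
    sample = pd.DataFrame({
        "customer_id": [1, 2, 2, None],
        "region": ["north", "south", "south", "east"],
    })
    print(profile_table(sample))
```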

Posted 3 weeks ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Chennai, Tamil Nadu, India

On-site

We're hiring a Fullstack Software Engineer to help develop the Cloud Data Hub, our AWS-based data platform for managing petabytes of enterprise data. You'll focus on building end-to-end platform features that empower customers to design, deploy, and monitor their own data pipelines. This role spans both backend services (Python) and frontend components (React or similar), delivering intuitive, robust self-service tooling for data engineering workflows.

Responsibilities:
- Design, develop, and maintain backend services and APIs in Python to enable self-service pipeline creation, orchestration, and monitoring.
- Build frontend features (e.g., dashboards, forms, monitoring views) that help customers easily design and manage data pipelines.
- Develop frameworks, templates, and tooling using AWS services (Glue, Lambda, Step Functions) to support scalable, secure ETL/ELT workflows.
- Integrate AWS services (S3, Glue, IAM) to deliver secure and reliable data movement and cataloging.
- Implement robust monitoring, validation, and alerting features to ensure data quality, integrity, and lineage across customer pipelines.
- Enable connectivity and integrations with external systems such as Snowflake, Azure, and SAP.
- Collaborate closely with UX designers and other engineers to deliver intuitive, well-documented user experiences.
- Participate in architecture discussions, sprint planning, code reviews, and CI/CD workflows.

What should you bring along?

Requirements:
- Strong experience with Python for backend development, ideally in a cloud-native, serverless context.
- Proficiency with frontend development in frameworks such as React, Vue, or Angular.
- Solid understanding of AWS services (Glue, S3, Lambda, IAM, Step Functions) and their use in data workflows.
- Experience designing and consuming REST APIs.
- Familiarity with data engineering concepts such as data modeling, schema evolution, and partitioning strategies for large-scale storage.
- Proven ability to build shared tools, libraries, or frameworks that enable other engineers or customers to work more effectively.
- Commitment to writing well-tested, maintainable code, with experience in unit and integration testing.
- Familiarity with version control (Git) and CI/CD best practices.
- Strong communication skills and a collaborative, customer-focused mindset.

Must-have technical skills:
- Python, ideally in a cloud-native, serverless context
- React, Vue, or Angular
- AWS services: Glue, S3, Lambda, IAM, Step Functions
- REST APIs
- Data engineering concepts such as data modeling, schema evolution, and partitioning strategies
- GitHub

Good-to-have technical skills:
- Experience integrating with Snowflake, Azure, or SAP systems.
- Familiarity with the Glue Data Catalog, AWS Secrets Manager, or other security best practices.
- Experience building data quality monitoring dashboards or developer tooling for pipeline management.
- Experience with Infrastructure as Code tools such as Terraform for managing AWS resources.
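
To make the backend-services side concrete, here is a small sketch of an AWS Lambda handler (Python) of the kind that could sit behind a self-service pipeline API: it validates a request and starts a Glue job. The event shape, job name, and parameters are hypothetical, not details of the Cloud Data Hub itself.

```python
# Hypothetical self-service endpoint: validate a pipeline request and start a Glue job.
import json

import boto3

glue = boto3.client("glue")

def handler(event, context):
    body = json.loads(event.get("body") or "{}")
    dataset = body.get("dataset")
    target = body.get("target_table")

    # Basic request validation before touching any infrastructure.
    if not dataset or not target:
        return {"statusCode": 400,
                "body": json.dumps({"error": "dataset and target_table are required"})}

    run = glue.start_job_run(
        JobName="cdh-ingest-pipeline",            # placeholder job name
        Arguments={"--dataset": dataset, "--target_table": target},
    )
    return {"statusCode": 202,
            "body": json.dumps({"job_run_id": run["JobRunId"]})}
```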

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Senior Software Engineer - Data Management at Capital One, you will have the opportunity to work on innovative products and solutions in the Data Management domain. You will be responsible for building software solutions to address challenges such as data publishing, data consumption, data governance, and infrastructure management. Your role will involve staying updated on industry trends and practices to drive the development of new projects and product prototypes.

To qualify for this position, you must hold a Bachelor's degree in Computer Science or a related field and have at least 5 years of professional software development experience. Additionally, you should have a minimum of 2 years of experience in building software solutions related to Data Catalog, Metadata Store, Access Control, Policy Enforcement, Data Governance, Data Lineage, Data Monitoring and Alerting, or Data Scanning and Protection. Proficiency in at least one of the following programming languages is required: Golang, Java, Python, Rust, or C++. Moreover, you should have at least 2 years of experience working with a public cloud platform such as AWS, Microsoft Azure, or Google Cloud.

Preferred qualifications for this role include a Master's degree in Computer Science or a related field, at least 7 years of professional software development experience, and experience in developing commercial Data Management products from the ground up. Experience supporting commercial Data Management products in the cloud with enterprise clients is also beneficial.

Please note that at this time, Capital One will not sponsor a new applicant for employment authorization for this position. Capital One is an equal opportunity employer committed to non-discrimination and promotes a drug-free workplace. If you require accommodation during the application process, please contact Capital One Recruiting. For technical support or questions about the recruiting process, you can reach out to Careers@capitalone.com.

Join us in our journey to drive innovation and bring cutting-edge tools to the market. Your expertise and dedication will play a crucial role in shaping the future of data management at Capital One Software.

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

haryana

On-site

At Capgemini Engineering, the world leader in engineering services, a global team of engineers, scientists, and architects collaborates to help innovative companies unleash their potential. With a focus on digital and software technology, we provide unique R&D and engineering services across industries ranging from autonomous cars to life-saving robots. Join us for a career filled with opportunities where you can truly make a difference and experience the thrill of diverse challenges each day.

Role: Collibra Subject Matter Expert (SME)
Experience: 2 to 4 years
Location: Gurgaon
Job Grade: B1
Notice Period: 0-30 Days

Key Responsibilities:
- More than 3 years of IT industry experience in the Data Governance, Data Engineering, and Data Architecture domains.
- Demonstrated experience working in a production-grade environment.
- Certification in Collibra learning paths is advantageous.
- Proficient in data governance, metadata management, and data catalog solutions, particularly on Collibra tools.
- Hands-on experience with Linux is essential, along with expertise in relational and non-relational databases such as MySQL and PostgreSQL.
- Skilled in troubleshooting web-based applications and familiar with Java and REST APIs (see the integration sketch following this listing).
- Comprehensive knowledge of certificates and authentication, including SSL, SSO, and LDAP.
- Understanding of the Collibra operating model, workflow BPMN development, and integration of various applications/systems with Collibra.
- Strong problem-solving skills, analytical abilities, and effective written and verbal communication.
- Knowledge of SaaS solutions and containerized applications is a bonus.
- Proficiency in Collibra products such as Data Governance, Data Catalog, and the Data Intelligence Platform.

About Capgemini: Capgemini is a global business and technology transformation partner, dedicated to helping organizations accelerate their transition to a digital and sustainable world. With a diverse team of over 340,000 members across 50+ countries, Capgemini leverages its 55-year heritage to deliver end-to-end services and solutions. Focused on AI, generative AI, cloud, data, and deep industry expertise, Capgemini collaborates with clients to unlock the value of technology, addressing a wide range of business needs.
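As a hedged illustration of the "integrate other systems with Collibra over REST" responsibility above, here is a short Python sketch using the requests library. The instance URL, credentials, endpoint path (/rest/2.0/assets), query parameters, and response shape are assumptions for illustration only; the actual interface should be taken from your Collibra instance's REST API documentation.

```python
"""Hedged sketch of querying Collibra assets over REST from an external system.

Base URL, auth, endpoint path, parameters, and response fields are assumed
for illustration; verify them against the Collibra REST API docs.
"""
import requests

COLLIBRA_URL = "https://your-instance.collibra.com"  # hypothetical instance
session = requests.Session()
session.auth = ("svc-integration", "example-password")  # or a bearer token / SSO flow in practice


def find_assets_by_name(name: str, limit: int = 10) -> list[dict]:
    # Assumed v2-style endpoint for searching assets by (partial) name.
    resp = session.get(
        f"{COLLIBRA_URL}/rest/2.0/assets",
        params={"name": name, "nameMatchMode": "ANYWHERE", "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])


if __name__ == "__main__":
    for asset in find_assets_by_name("customer"):
        print(asset.get("id"), asset.get("name"))
```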

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we are a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth: bringing real positive change in an increasingly virtual world, driving us beyond generational gaps and the disruptions of the future.

We are looking to hire Informatica Cloud Services professionals in the following areas. The client is seeking an experienced Data Quality, Data Governance, and Catalog team member with over 7 years of data quality and data cataloguing expertise, primarily on Informatica, to lead data governance, data catalog, and data quality efforts. In this pivotal role, you will ensure the integrity and quality of critical data within the product, collaborate closely with the Data & Analytics lead, and drive the development of efficient data processes on industry-known tools such as Informatica, Alation, Atlan, or Collibra.

Key Responsibilities:
- 7-8 years of enterprise IICS data integration and management experience working with data management, EDW technologies, and data governance solutions.
- 5+ years of hands-on experience in Informatica CDQ and Data Quality, including executing at least 2 large Data Governance and Data Quality projects from inception to production as the technology expert.
- Practical experience configuring data governance resources including business glossaries, resources, dashboards, policies, and search.
- Strong understanding of data quality, data cataloguing, and data governance best practices (a conceptual data-quality rule sketch follows this listing).
- Thorough understanding of designing and developing data catalogs and data assets on industry-leading tools such as an open-source catalog tool, Informatica Cloud Data Catalog, Alation, Collibra, or Atlan.
- Management of the enterprise glossary; configuration of Collibra/Alation data catalog resources, data lineage, custom resources, relationships, data domains, and more.
- Implementing Critical Data Elements to govern, with corresponding data quality rules, policies, regulations, roles, users, data source systems, and dashboards/visualizations for multiple data domains.
- Good to have: experience in administration and management of the Collibra/Alation data catalogue tool, configuration of data profiling and data lineage, and working with stakeholders to understand catalog requirements and configure them in the tool.

At YASH, you are empowered to create a career that will take you where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided with technology, for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles:
- Flexible work arrangements, free spirit, and emotional positivity.
- Agile self-determination, trust, transparency, and open collaboration.
- All support needed for the realization of business goals.
- Stable employment with a great atmosphere and ethical corporate culture.
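The data quality rules referenced above (completeness, validity, range checks) can be illustrated in a tool-agnostic way. The sketch below is not Informatica CDQ code; it is a conceptual pandas example with made-up column names and thresholds, showing the kind of rule a CDQ platform would configure and enforce.

```python
"""Tool-agnostic sketch of data-quality rules of the kind a CDQ platform
enforces: completeness, pattern validity, and range checks.
Column names and sample data are made up for illustration."""
import re

import pandas as pd


def run_quality_checks(df: pd.DataFrame) -> dict[str, float]:
    results: dict[str, float] = {}
    # Completeness: share of non-null customer_id values.
    results["customer_id_completeness"] = df["customer_id"].notna().mean()
    # Validity: email values must match a simple pattern.
    email_pattern = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
    results["email_validity"] = (
        df["email"].dropna().astype(str).apply(lambda v: bool(email_pattern.match(v))).mean()
    )
    # Range rule: order_amount must be non-negative.
    results["order_amount_in_range"] = (df["order_amount"] >= 0).mean()
    return results


if __name__ == "__main__":
    sample = pd.DataFrame({
        "customer_id": [1, 2, None],
        "email": ["a@example.com", "not-an-email", None],
        "order_amount": [10.0, -5.0, 3.5],
    })
    print(run_quality_checks(sample))  # each metric is a 0..1 pass rate
```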

Posted 1 month ago

Apply

8.0 - 13.0 years

0 Lacs

karnataka

On-site

You will be joining MRI Software as a Data Engineering Leader responsible for designing, building, and managing data integration solutions. Your expertise in Azure Data Factory, Azure Synapse Analytics, and data warehousing will be crucial for leading technical implementations, mentoring junior developers, collaborating with global teams, and engaging with customers and stakeholders to ensure seamless and scalable data integration.

Your key responsibilities will include leading and mentoring a team of data engineers, designing and implementing Azure Synapse Analytics solutions, optimizing ETL pipelines and Synapse Spark workloads, and ensuring data quality, security, and governance best practices. You will also collaborate with business stakeholders to develop data-driven solutions.

To excel in this role, you should have 8-13 years of experience in Data Engineering, BI, or Cloud Analytics, with expertise in Azure Synapse, Data Factory, SQL, and ETL processes. Strong leadership, problem-solving, and stakeholder management skills are essential, and knowledge of Power BI, Python, or Spark would be advantageous. Deep knowledge of data modelling techniques, ETL pipeline development, Azure resource cost management, and data governance practices will also be key to your success.

Additionally, you should be proficient in writing complex SQL queries and implementing best security practices for Azure components, and have experience in master data and metadata management. Your ability to manage a complex business environment, lead and support team members, and advocate for Agile practices will be highly valued. Experience in change management, data warehouse architecture, dimensional modelling, and data integrity validation will further strengthen your profile. Collaboration with Product Owners and data engineers to translate business requirements into effective dimensional models, strong SQL skills, and the ability to extract, clean, and transform raw data for dimensional modelling are essential aspects of this role (a dimensional-modelling sketch follows this listing). Desired skills include Python, real-time data streaming frameworks, and AI and Machine Learning data pipelines. A degree in Computer Science, Software Engineering, or a related field is required for this position.

In return, you can look forward to learning leading technical and industry standards, hybrid working arrangements, an annual performance-related bonus, and other benefits that foster an engaging, fun, and inclusive culture at MRI Software. MRI Software is a global PropTech leader dedicated to empowering real estate companies with innovative applications and hosted solutions. With a flexible technology platform and a connected ecosystem, MRI Software caters to the unique needs of real estate businesses worldwide. Operating across multiple regions, MRI Software boasts nearly five decades of expertise, a diverse team of over 4,000 professionals, and a commitment to Equal Employment Opportunity.
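As a hedged illustration of the dimensional-modelling and Synapse Spark work described above, here is a short PySpark sketch that shapes raw records into a conformed dimension table with a surrogate key and partitioned output. The storage paths, column names, and table design are invented for illustration and are not MRI's actual schemas.

```python
"""Hypothetical sketch (not MRI's actual model) of shaping raw property
records into a conformed dimension table for a star schema. Runs on any
Spark runtime, including Synapse Spark pools; paths and columns are made up."""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dim_property_sketch").getOrCreate()

# Assumed raw zone path in a data lake; replace with your own source.
raw = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/properties/")

dim_property = (
    raw.select("property_id", "property_name", "city", "country", "updated_at")
       .dropDuplicates(["property_id"])                        # one row per business key
       .withColumn("property_key", F.xxhash64("property_id"))  # deterministic surrogate key
       .withColumn("load_ts", F.current_timestamp())
)

# Partition the curated output by country so downstream Synapse queries can prune files.
(dim_property.write
    .mode("overwrite")
    .partitionBy("country")
    .parquet("abfss://curated@examplelake.dfs.core.windows.net/dim_property/"))
```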

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
